00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1038 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3705 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.137 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.176 Using shallow fetch with depth 1 00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.176 > git --version # timeout=10 00:00:00.203 > git --version # 'git version 2.39.2' 00:00:00.203 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.940 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.950 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.973 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.973 > git config core.sparsecheckout # timeout=10 00:00:06.984 > git read-tree -mu HEAD # timeout=10 00:00:07.001 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.025 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.025 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.116 [Pipeline] Start of Pipeline 00:00:07.129 [Pipeline] library 00:00:07.131 Loading library shm_lib@master 00:00:07.131 Library shm_lib@master is cached. Copying from home. 00:00:07.149 [Pipeline] node 00:00:07.158 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:07.160 [Pipeline] { 00:00:07.170 [Pipeline] catchError 00:00:07.171 [Pipeline] { 00:00:07.183 [Pipeline] wrap 00:00:07.194 [Pipeline] { 00:00:07.201 [Pipeline] stage 00:00:07.203 [Pipeline] { (Prologue) 00:00:07.218 [Pipeline] echo 00:00:07.219 Node: VM-host-SM0 00:00:07.223 [Pipeline] cleanWs 00:00:07.232 [WS-CLEANUP] Deleting project workspace... 00:00:07.232 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.238 [WS-CLEANUP] done 00:00:07.413 [Pipeline] setCustomBuildProperty 00:00:07.497 [Pipeline] httpRequest 00:00:08.392 [Pipeline] echo 00:00:08.394 Sorcerer 10.211.164.20 is alive 00:00:08.405 [Pipeline] retry 00:00:08.407 [Pipeline] { 00:00:08.422 [Pipeline] httpRequest 00:00:08.426 HttpMethod: GET 00:00:08.427 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.427 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.444 Response Code: HTTP/1.1 200 OK 00:00:08.444 Success: Status code 200 is in the accepted range: 200,404 00:00:08.445 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.642 [Pipeline] } 00:00:11.659 [Pipeline] // retry 00:00:11.667 [Pipeline] sh 00:00:11.949 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.966 [Pipeline] httpRequest 00:00:12.357 [Pipeline] echo 00:00:12.359 Sorcerer 10.211.164.20 is alive 00:00:12.369 [Pipeline] retry 00:00:12.371 [Pipeline] { 00:00:12.385 [Pipeline] httpRequest 00:00:12.391 HttpMethod: GET 00:00:12.391 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.392 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.414 Response Code: HTTP/1.1 200 OK 00:00:12.415 Success: Status code 200 is in the accepted range: 200,404 00:00:12.415 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:09.907 [Pipeline] } 00:01:09.926 [Pipeline] // retry 00:01:09.935 [Pipeline] sh 00:01:10.224 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:12.770 [Pipeline] sh 00:01:13.051 + git -C spdk log --oneline -n5 00:01:13.051 c13c99a5e test: Various fixes for Fedora40 00:01:13.051 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:13.051 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:13.051 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:13.051 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:13.070 [Pipeline] withCredentials 00:01:13.080 > git --version # timeout=10 00:01:13.093 > git --version # 'git version 2.39.2' 00:01:13.108 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:13.111 [Pipeline] { 00:01:13.120 [Pipeline] retry 00:01:13.122 [Pipeline] { 00:01:13.138 [Pipeline] sh 00:01:13.419 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:13.432 [Pipeline] } 00:01:13.455 [Pipeline] // retry 00:01:13.463 [Pipeline] } 00:01:13.483 [Pipeline] // withCredentials 00:01:13.493 [Pipeline] httpRequest 00:01:13.861 [Pipeline] echo 00:01:13.863 Sorcerer 10.211.164.20 is alive 00:01:13.872 [Pipeline] retry 00:01:13.874 [Pipeline] { 00:01:13.888 [Pipeline] httpRequest 00:01:13.892 HttpMethod: GET 00:01:13.893 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:13.893 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:13.901 Response Code: HTTP/1.1 200 OK 00:01:13.902 Success: Status code 200 is in the accepted range: 200,404 00:01:13.902 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:20.991 [Pipeline] } 00:01:21.011 [Pipeline] // retry 
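The pipeline pulls each of its inputs (the jbp scripts, the spdk source, and next the dpdk source) from the 10.211.164.20 package cache with an httpRequest wrapped in a retry block, then unpacks the tarball with tar --no-same-owner. A minimal stand-alone sketch of that fetch-and-extract step is below; the curl invocation, the PKG value, and the WORKSPACE default are illustrative stand-ins and not commands copied from this log.

#!/usr/bin/env bash
# Sketch only: mirror the cache-fetch + extract pattern seen in the log above.
set -euo pipefail

CACHE="http://10.211.164.20/packages"                       # package cache address from the log
PKG="spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz"  # example tarball name taken from the log
WORKSPACE="${WORKSPACE:-$PWD}"                              # placeholder; the job uses its Jenkins workspace

# Retry a few times, as the [Pipeline] retry block does for httpRequest.
curl -fSL --retry 3 -o "${WORKSPACE}/${PKG}" "${CACHE}/${PKG}"

# --no-same-owner matches the flag used in the log, so extracted files are
# owned by the build user rather than by the archive's original uid/gid.
tar --no-same-owner -xf "${WORKSPACE}/${PKG}" -C "${WORKSPACE}"
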
00:01:21.019 [Pipeline] sh 00:01:21.300 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:22.686 [Pipeline] sh 00:01:22.966 + git -C dpdk log --oneline -n5 00:01:22.966 eeb0605f11 version: 23.11.0 00:01:22.966 238778122a doc: update release notes for 23.11 00:01:22.966 46aa6b3cfc doc: fix description of RSS features 00:01:22.966 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:22.966 7e421ae345 devtools: support skipping forbid rule check 00:01:22.982 [Pipeline] writeFile 00:01:22.998 [Pipeline] sh 00:01:23.278 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:23.288 [Pipeline] sh 00:01:23.567 + cat autorun-spdk.conf 00:01:23.567 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.567 SPDK_TEST_NVMF=1 00:01:23.567 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.567 SPDK_TEST_USDT=1 00:01:23.567 SPDK_RUN_UBSAN=1 00:01:23.567 SPDK_TEST_NVMF_MDNS=1 00:01:23.567 NET_TYPE=virt 00:01:23.567 SPDK_JSONRPC_GO_CLIENT=1 00:01:23.567 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:23.567 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:23.567 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.574 RUN_NIGHTLY=1 00:01:23.575 [Pipeline] } 00:01:23.588 [Pipeline] // stage 00:01:23.602 [Pipeline] stage 00:01:23.604 [Pipeline] { (Run VM) 00:01:23.617 [Pipeline] sh 00:01:23.896 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:23.896 + echo 'Start stage prepare_nvme.sh' 00:01:23.896 Start stage prepare_nvme.sh 00:01:23.896 + [[ -n 4 ]] 00:01:23.896 + disk_prefix=ex4 00:01:23.896 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:23.896 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:23.896 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:23.896 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.896 ++ SPDK_TEST_NVMF=1 00:01:23.896 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.896 ++ SPDK_TEST_USDT=1 00:01:23.896 ++ SPDK_RUN_UBSAN=1 00:01:23.896 ++ SPDK_TEST_NVMF_MDNS=1 00:01:23.896 ++ NET_TYPE=virt 00:01:23.896 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:23.896 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:23.896 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:23.896 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:23.896 ++ RUN_NIGHTLY=1 00:01:23.896 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:23.896 + nvme_files=() 00:01:23.896 + declare -A nvme_files 00:01:23.896 + backend_dir=/var/lib/libvirt/images/backends 00:01:23.896 + nvme_files['nvme.img']=5G 00:01:23.896 + nvme_files['nvme-cmb.img']=5G 00:01:23.896 + nvme_files['nvme-multi0.img']=4G 00:01:23.896 + nvme_files['nvme-multi1.img']=4G 00:01:23.896 + nvme_files['nvme-multi2.img']=4G 00:01:23.896 + nvme_files['nvme-openstack.img']=8G 00:01:23.896 + nvme_files['nvme-zns.img']=5G 00:01:23.896 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:23.896 + (( SPDK_TEST_FTL == 1 )) 00:01:23.896 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:23.896 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:23.896 + for nvme in "${!nvme_files[@]}" 00:01:23.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:23.896 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.896 + for nvme in "${!nvme_files[@]}" 00:01:23.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:23.896 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.896 + for nvme in "${!nvme_files[@]}" 00:01:23.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:23.896 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:23.896 + for nvme in "${!nvme_files[@]}" 00:01:23.897 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:23.897 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.897 + for nvme in "${!nvme_files[@]}" 00:01:23.897 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:23.897 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.897 + for nvme in "${!nvme_files[@]}" 00:01:23.897 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:24.155 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:24.155 + for nvme in "${!nvme_files[@]}" 00:01:24.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:24.155 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.155 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:24.155 + echo 'End stage prepare_nvme.sh' 00:01:24.155 End stage prepare_nvme.sh 00:01:24.168 [Pipeline] sh 00:01:24.444 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:24.444 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:24.444 00:01:24.444 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:24.444 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:24.444 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:24.444 HELP=0 00:01:24.444 DRY_RUN=0 00:01:24.444 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:24.444 NVME_DISKS_TYPE=nvme,nvme, 00:01:24.444 NVME_AUTO_CREATE=0 00:01:24.444 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:24.444 NVME_CMB=,, 00:01:24.444 NVME_PMR=,, 00:01:24.444 NVME_ZNS=,, 00:01:24.444 NVME_MS=,, 00:01:24.444 NVME_FDP=,, 00:01:24.444 
SPDK_VAGRANT_DISTRO=fedora39 00:01:24.444 SPDK_VAGRANT_VMCPU=10 00:01:24.444 SPDK_VAGRANT_VMRAM=12288 00:01:24.444 SPDK_VAGRANT_PROVIDER=libvirt 00:01:24.444 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:24.444 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:24.444 SPDK_OPENSTACK_NETWORK=0 00:01:24.444 VAGRANT_PACKAGE_BOX=0 00:01:24.445 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:24.445 FORCE_DISTRO=true 00:01:24.445 VAGRANT_BOX_VERSION= 00:01:24.445 EXTRA_VAGRANTFILES= 00:01:24.445 NIC_MODEL=e1000 00:01:24.445 00:01:24.445 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:24.445 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:27.729 Bringing machine 'default' up with 'libvirt' provider... 00:01:27.987 ==> default: Creating image (snapshot of base box volume). 00:01:28.246 ==> default: Creating domain with the following settings... 00:01:28.246 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733557958_dfeec561989b8ab9389a 00:01:28.246 ==> default: -- Domain type: kvm 00:01:28.246 ==> default: -- Cpus: 10 00:01:28.246 ==> default: -- Feature: acpi 00:01:28.246 ==> default: -- Feature: apic 00:01:28.246 ==> default: -- Feature: pae 00:01:28.246 ==> default: -- Memory: 12288M 00:01:28.246 ==> default: -- Memory Backing: hugepages: 00:01:28.246 ==> default: -- Management MAC: 00:01:28.246 ==> default: -- Loader: 00:01:28.246 ==> default: -- Nvram: 00:01:28.246 ==> default: -- Base box: spdk/fedora39 00:01:28.246 ==> default: -- Storage pool: default 00:01:28.246 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733557958_dfeec561989b8ab9389a.img (20G) 00:01:28.246 ==> default: -- Volume Cache: default 00:01:28.246 ==> default: -- Kernel: 00:01:28.246 ==> default: -- Initrd: 00:01:28.246 ==> default: -- Graphics Type: vnc 00:01:28.246 ==> default: -- Graphics Port: -1 00:01:28.246 ==> default: -- Graphics IP: 127.0.0.1 00:01:28.246 ==> default: -- Graphics Password: Not defined 00:01:28.246 ==> default: -- Video Type: cirrus 00:01:28.246 ==> default: -- Video VRAM: 9216 00:01:28.246 ==> default: -- Sound Type: 00:01:28.246 ==> default: -- Keymap: en-us 00:01:28.246 ==> default: -- TPM Path: 00:01:28.246 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:28.246 ==> default: -- Command line args: 00:01:28.246 ==> default: -> value=-device, 00:01:28.246 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:28.246 ==> default: -> value=-drive, 00:01:28.246 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:28.246 ==> default: -> value=-device, 00:01:28.246 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.246 ==> default: -> value=-device, 00:01:28.246 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:28.246 ==> default: -> value=-drive, 00:01:28.246 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:28.246 ==> default: -> value=-device, 00:01:28.246 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.246 ==> default: -> value=-drive, 00:01:28.246 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:28.246 ==> default: -> value=-device, 00:01:28.246 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.246 ==> default: -> value=-drive, 00:01:28.246 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:28.246 ==> default: -> value=-device, 00:01:28.246 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.505 ==> default: Creating shared folders metadata... 00:01:28.505 ==> default: Starting domain. 00:01:30.409 ==> default: Waiting for domain to get an IP address... 00:01:48.489 ==> default: Waiting for SSH to become available... 00:01:49.423 ==> default: Configuring and enabling network interfaces... 00:01:54.683 default: SSH address: 192.168.121.109:22 00:01:54.683 default: SSH username: vagrant 00:01:54.683 default: SSH auth method: private key 00:01:56.057 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:04.169 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:09.499 ==> default: Mounting SSHFS shared folder... 00:02:10.873 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:10.873 ==> default: Checking Mount.. 00:02:12.248 ==> default: Folder Successfully Mounted! 00:02:12.248 ==> default: Running provisioner: file... 00:02:13.183 default: ~/.gitconfig => .gitconfig 00:02:13.442 00:02:13.442 SUCCESS! 00:02:13.442 00:02:13.442 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:13.442 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:13.442 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:13.442 00:02:13.450 [Pipeline] } 00:02:13.465 [Pipeline] // stage 00:02:13.474 [Pipeline] dir 00:02:13.474 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:13.476 [Pipeline] { 00:02:13.488 [Pipeline] catchError 00:02:13.489 [Pipeline] { 00:02:13.504 [Pipeline] sh 00:02:13.784 + vagrant ssh-config --host vagrant 00:02:13.784 + sed -ne /^Host/,$p 00:02:13.784 + tee ssh_conf 00:02:17.063 Host vagrant 00:02:17.063 HostName 192.168.121.109 00:02:17.063 User vagrant 00:02:17.063 Port 22 00:02:17.063 UserKnownHostsFile /dev/null 00:02:17.063 StrictHostKeyChecking no 00:02:17.063 PasswordAuthentication no 00:02:17.063 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:17.063 IdentitiesOnly yes 00:02:17.063 LogLevel FATAL 00:02:17.063 ForwardAgent yes 00:02:17.063 ForwardX11 yes 00:02:17.063 00:02:17.075 [Pipeline] withEnv 00:02:17.077 [Pipeline] { 00:02:17.089 [Pipeline] sh 00:02:17.366 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:17.366 source /etc/os-release 00:02:17.366 [[ -e /image.version ]] && img=$(< /image.version) 00:02:17.366 # Minimal, systemd-like check. 
00:02:17.366 if [[ -e /.dockerenv ]]; then 00:02:17.366 # Clear garbage from the node's name: 00:02:17.366 # agt-er_autotest_547-896 -> autotest_547-896 00:02:17.366 # $HOSTNAME is the actual container id 00:02:17.366 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:17.366 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:17.366 # We can assume this is a mount from a host where container is running, 00:02:17.366 # so fetch its hostname to easily identify the target swarm worker. 00:02:17.366 container="$(< /etc/hostname) ($agent)" 00:02:17.366 else 00:02:17.366 # Fallback 00:02:17.366 container=$agent 00:02:17.366 fi 00:02:17.366 fi 00:02:17.366 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:17.366 00:02:17.635 [Pipeline] } 00:02:17.652 [Pipeline] // withEnv 00:02:17.661 [Pipeline] setCustomBuildProperty 00:02:17.675 [Pipeline] stage 00:02:17.677 [Pipeline] { (Tests) 00:02:17.695 [Pipeline] sh 00:02:17.978 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:18.247 [Pipeline] sh 00:02:18.525 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:18.796 [Pipeline] timeout 00:02:18.796 Timeout set to expire in 1 hr 0 min 00:02:18.797 [Pipeline] { 00:02:18.811 [Pipeline] sh 00:02:19.090 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:19.655 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:19.667 [Pipeline] sh 00:02:19.942 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:20.211 [Pipeline] sh 00:02:20.487 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:20.760 [Pipeline] sh 00:02:21.038 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:21.295 ++ readlink -f spdk_repo 00:02:21.295 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:21.295 + [[ -n /home/vagrant/spdk_repo ]] 00:02:21.295 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:21.295 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:21.295 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:21.295 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:21.295 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:21.295 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:21.295 + cd /home/vagrant/spdk_repo 00:02:21.295 + source /etc/os-release 00:02:21.295 ++ NAME='Fedora Linux' 00:02:21.295 ++ VERSION='39 (Cloud Edition)' 00:02:21.295 ++ ID=fedora 00:02:21.295 ++ VERSION_ID=39 00:02:21.295 ++ VERSION_CODENAME= 00:02:21.295 ++ PLATFORM_ID=platform:f39 00:02:21.295 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:21.295 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:21.295 ++ LOGO=fedora-logo-icon 00:02:21.295 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:21.295 ++ HOME_URL=https://fedoraproject.org/ 00:02:21.295 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:21.295 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:21.296 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:21.296 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:21.296 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:21.296 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:21.296 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:21.296 ++ SUPPORT_END=2024-11-12 00:02:21.296 ++ VARIANT='Cloud Edition' 00:02:21.296 ++ VARIANT_ID=cloud 00:02:21.296 + uname -a 00:02:21.296 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:21.296 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:21.296 Hugepages 00:02:21.296 node hugesize free / total 00:02:21.296 node0 1048576kB 0 / 0 00:02:21.296 node0 2048kB 0 / 0 00:02:21.296 00:02:21.296 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:21.296 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:21.296 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:21.296 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:21.554 + rm -f /tmp/spdk-ld-path 00:02:21.554 + source autorun-spdk.conf 00:02:21.554 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.554 ++ SPDK_TEST_NVMF=1 00:02:21.554 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:21.554 ++ SPDK_TEST_USDT=1 00:02:21.554 ++ SPDK_RUN_UBSAN=1 00:02:21.554 ++ SPDK_TEST_NVMF_MDNS=1 00:02:21.554 ++ NET_TYPE=virt 00:02:21.554 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:21.554 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:21.554 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:21.554 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.554 ++ RUN_NIGHTLY=1 00:02:21.554 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:21.554 + [[ -n '' ]] 00:02:21.554 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:21.554 + for M in /var/spdk/build-*-manifest.txt 00:02:21.554 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:21.554 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.554 + for M in /var/spdk/build-*-manifest.txt 00:02:21.554 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:21.554 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.554 + for M in /var/spdk/build-*-manifest.txt 00:02:21.554 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:21.554 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:21.554 ++ uname 00:02:21.554 + [[ Linux == \L\i\n\u\x ]] 00:02:21.554 + sudo dmesg -T 00:02:21.554 + sudo dmesg --clear 00:02:21.554 + dmesg_pid=5967 00:02:21.554 + sudo dmesg -Tw 00:02:21.554 + [[ Fedora Linux == FreeBSD ]] 00:02:21.554 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:21.554 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:21.554 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:21.554 + [[ -x /usr/src/fio-static/fio ]] 00:02:21.554 + export FIO_BIN=/usr/src/fio-static/fio 00:02:21.554 + FIO_BIN=/usr/src/fio-static/fio 00:02:21.554 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:21.554 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:21.554 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:21.554 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.554 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:21.554 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:21.554 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.554 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:21.554 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:21.554 Test configuration: 00:02:21.554 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:21.554 SPDK_TEST_NVMF=1 00:02:21.554 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:21.554 SPDK_TEST_USDT=1 00:02:21.554 SPDK_RUN_UBSAN=1 00:02:21.554 SPDK_TEST_NVMF_MDNS=1 00:02:21.554 NET_TYPE=virt 00:02:21.554 SPDK_JSONRPC_GO_CLIENT=1 00:02:21.554 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:21.554 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:21.554 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:21.554 RUN_NIGHTLY=1 07:53:32 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:21.554 07:53:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:21.554 07:53:32 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:21.554 07:53:32 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:21.554 07:53:32 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:21.554 07:53:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.554 07:53:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.554 07:53:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.554 07:53:32 -- paths/export.sh@5 -- $ export PATH 00:02:21.554 07:53:32 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:21.554 07:53:32 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:21.554 07:53:32 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:21.554 07:53:32 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733558012.XXXXXX 00:02:21.554 07:53:32 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733558012.xgOpVG 00:02:21.554 07:53:32 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:21.554 07:53:32 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:21.554 07:53:32 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.555 07:53:32 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:21.555 07:53:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:21.555 07:53:32 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:21.555 07:53:32 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:21.555 07:53:32 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:21.555 07:53:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.824 07:53:32 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:21.824 07:53:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.824 07:53:32 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.824 07:53:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:21.824 07:53:32 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.824 Sat Dec 7 07:53:32 AM UTC 2024 00:02:21.824 07:53:32 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.824 LTS-67-gc13c99a5e 00:02:21.824 07:53:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:21.824 07:53:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.824 07:53:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.824 07:53:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:21.824 07:53:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:21.824 07:53:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.824 ************************************ 00:02:21.824 START TEST ubsan 00:02:21.824 ************************************ 00:02:21.824 using ubsan 00:02:21.824 07:53:32 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:21.824 00:02:21.824 real 0m0.000s 00:02:21.824 user 0m0.000s 00:02:21.824 sys 0m0.000s 00:02:21.824 07:53:32 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:21.824 07:53:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.824 ************************************ 00:02:21.824 END TEST ubsan 00:02:21.824 ************************************ 00:02:21.824 
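The config_params line above shows the configure flags that the job's autorun-spdk.conf settings ultimately turn into: SPDK_RUN_UBSAN=1 appears as --enable-ubsan, and SPDK_TEST_NATIVE_DPDK=v23.11 together with SPDK_RUN_EXTERNAL_DPDK appears as --with-dpdk=/home/vagrant/spdk_repo/dpdk/build. The sketch below only illustrates that mapping idea; it is not SPDK's actual get_config_params helper, and the only variables it reads are the ones visible in this log.

#!/usr/bin/env bash
# Sketch only: derive a couple of configure flags from an autorun-spdk.conf,
# in the spirit of the config_params value printed in the log above.
set -euo pipefail

conf="${1:-/home/vagrant/spdk_repo/autorun-spdk.conf}"   # path used in this run
# shellcheck disable=SC1090
source "$conf"

config_params="--enable-debug --enable-werror"

# UBSan build requested by the job config (SPDK_RUN_UBSAN=1 above).
[[ "${SPDK_RUN_UBSAN:-0}" -eq 1 ]] && config_params+=" --enable-ubsan"

# Build against the externally built DPDK instead of the bundled submodule.
if [[ -n "${SPDK_TEST_NATIVE_DPDK:-}" && -n "${SPDK_RUN_EXTERNAL_DPDK:-}" ]]; then
    config_params+=" --with-dpdk=${SPDK_RUN_EXTERNAL_DPDK}"
fi

echo "$config_params"
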
07:53:32 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:21.824 07:53:32 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:21.824 07:53:32 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:21.824 07:53:32 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:21.824 07:53:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:21.824 07:53:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.824 ************************************ 00:02:21.824 START TEST build_native_dpdk 00:02:21.824 ************************************ 00:02:21.824 07:53:32 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:21.824 07:53:32 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:21.824 07:53:32 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:21.824 07:53:32 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:21.824 07:53:32 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:21.824 07:53:32 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:21.824 07:53:32 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:21.824 07:53:32 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:21.824 07:53:32 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:21.824 07:53:32 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:21.824 07:53:32 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:21.824 07:53:32 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:21.824 07:53:32 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:21.824 07:53:32 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:21.824 07:53:32 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:21.824 07:53:32 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:21.824 07:53:32 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:21.824 07:53:32 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:21.824 07:53:32 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:21.824 07:53:32 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:21.824 07:53:32 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:21.824 eeb0605f11 version: 23.11.0 00:02:21.824 238778122a doc: update release notes for 23.11 00:02:21.824 46aa6b3cfc doc: fix description of RSS features 00:02:21.824 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:21.824 7e421ae345 devtools: support skipping forbid rule check 00:02:21.824 07:53:32 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:21.825 07:53:32 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:21.825 07:53:32 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:21.825 07:53:32 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:21.825 07:53:32 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:21.825 07:53:32 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:21.825 07:53:32 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:21.825 07:53:32 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:21.825 07:53:32 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:21.825 07:53:32 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:21.825 07:53:32 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:21.825 07:53:32 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.825 07:53:32 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:21.825 07:53:32 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:21.825 07:53:32 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:21.825 07:53:32 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:21.825 07:53:32 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:21.825 07:53:32 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:21.825 07:53:32 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:21.825 07:53:32 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:21.825 07:53:32 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:21.825 07:53:32 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:21.825 07:53:32 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:21.825 07:53:32 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.825 07:53:32 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:21.825 07:53:32 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:21.825 07:53:32 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:21.825 07:53:32 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:21.825 07:53:32 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:21.825 07:53:32 -- scripts/common.sh@343 -- $ case "$op" in 00:02:21.825 07:53:32 -- scripts/common.sh@344 -- $ : 1 00:02:21.825 07:53:32 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:21.825 07:53:32 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.825 07:53:32 -- scripts/common.sh@364 -- $ decimal 23 00:02:21.825 07:53:32 -- scripts/common.sh@352 -- $ local d=23 00:02:21.825 07:53:32 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:21.825 07:53:32 -- scripts/common.sh@354 -- $ echo 23 00:02:21.825 07:53:32 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:21.825 07:53:32 -- scripts/common.sh@365 -- $ decimal 21 00:02:21.825 07:53:32 -- scripts/common.sh@352 -- $ local d=21 00:02:21.825 07:53:32 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:21.825 07:53:32 -- scripts/common.sh@354 -- $ echo 21 00:02:21.825 07:53:32 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:21.825 07:53:32 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:21.825 07:53:32 -- scripts/common.sh@366 -- $ return 1 00:02:21.825 07:53:32 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:21.825 patching file config/rte_config.h 00:02:21.825 Hunk #1 succeeded at 60 (offset 1 line). 00:02:21.825 07:53:32 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:21.825 07:53:32 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:21.825 07:53:32 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:21.825 07:53:32 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:21.825 07:53:32 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:21.825 07:53:32 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:21.825 07:53:32 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:21.825 07:53:32 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:21.825 07:53:32 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:21.825 07:53:32 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:21.825 07:53:32 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:21.825 07:53:32 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:21.825 07:53:32 -- scripts/common.sh@343 -- $ case "$op" in 00:02:21.825 07:53:32 -- scripts/common.sh@344 -- $ : 1 00:02:21.825 07:53:32 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:21.825 07:53:32 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:21.825 07:53:32 -- scripts/common.sh@364 -- $ decimal 23 00:02:21.825 07:53:32 -- scripts/common.sh@352 -- $ local d=23 00:02:21.825 07:53:32 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:21.825 07:53:32 -- scripts/common.sh@354 -- $ echo 23 00:02:21.825 07:53:32 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:21.825 07:53:32 -- scripts/common.sh@365 -- $ decimal 24 00:02:21.825 07:53:32 -- scripts/common.sh@352 -- $ local d=24 00:02:21.825 07:53:32 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:21.825 07:53:32 -- scripts/common.sh@354 -- $ echo 24 00:02:21.825 07:53:32 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:21.825 07:53:32 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:21.825 07:53:32 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:21.825 07:53:32 -- scripts/common.sh@367 -- $ return 0 00:02:21.825 07:53:32 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:21.825 patching file lib/pcapng/rte_pcapng.c 00:02:21.825 07:53:32 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:21.825 07:53:32 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:21.825 07:53:32 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:21.825 07:53:32 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:21.825 07:53:32 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:27.106 The Meson build system 00:02:27.106 Version: 1.5.0 00:02:27.106 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:27.106 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:27.106 Build type: native build 00:02:27.106 Program cat found: YES (/usr/bin/cat) 00:02:27.106 Project name: DPDK 00:02:27.106 Project version: 23.11.0 00:02:27.106 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:27.106 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:27.106 Host machine cpu family: x86_64 00:02:27.106 Host machine cpu: x86_64 00:02:27.106 Message: ## Building in Developer Mode ## 00:02:27.106 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:27.106 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:27.106 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:27.106 Program python3 found: YES (/usr/bin/python3) 00:02:27.106 Program cat found: YES (/usr/bin/cat) 00:02:27.106 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:27.106 Compiler for C supports arguments -march=native: YES 00:02:27.106 Checking for size of "void *" : 8 00:02:27.106 Checking for size of "void *" : 8 (cached) 00:02:27.106 Library m found: YES 00:02:27.106 Library numa found: YES 00:02:27.106 Has header "numaif.h" : YES 00:02:27.106 Library fdt found: NO 00:02:27.106 Library execinfo found: NO 00:02:27.106 Has header "execinfo.h" : YES 00:02:27.106 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:27.106 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:27.106 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:27.106 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:27.106 Run-time dependency openssl found: YES 3.1.1 00:02:27.106 Run-time dependency libpcap found: YES 1.10.4 00:02:27.106 Has header "pcap.h" with dependency libpcap: YES 00:02:27.106 Compiler for C supports arguments -Wcast-qual: YES 00:02:27.106 Compiler for C supports arguments -Wdeprecated: YES 00:02:27.106 Compiler for C supports arguments -Wformat: YES 00:02:27.106 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:27.106 Compiler for C supports arguments -Wformat-security: NO 00:02:27.106 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:27.106 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:27.106 Compiler for C supports arguments -Wnested-externs: YES 00:02:27.106 Compiler for C supports arguments -Wold-style-definition: YES 00:02:27.106 Compiler for C supports arguments -Wpointer-arith: YES 00:02:27.106 Compiler for C supports arguments -Wsign-compare: YES 00:02:27.106 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:27.106 Compiler for C supports arguments -Wundef: YES 00:02:27.106 Compiler for C supports arguments -Wwrite-strings: YES 00:02:27.106 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:27.106 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:27.106 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:27.106 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:27.106 Program objdump found: YES (/usr/bin/objdump) 00:02:27.106 Compiler for C supports arguments -mavx512f: YES 00:02:27.106 Checking if "AVX512 checking" compiles: YES 00:02:27.106 Fetching value of define "__SSE4_2__" : 1 00:02:27.106 Fetching value of define "__AES__" : 1 00:02:27.107 Fetching value of define "__AVX__" : 1 00:02:27.107 Fetching value of define "__AVX2__" : 1 00:02:27.107 Fetching value of define "__AVX512BW__" : (undefined) 00:02:27.107 Fetching value of define "__AVX512CD__" : (undefined) 00:02:27.107 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:27.107 Fetching value of define "__AVX512F__" : (undefined) 00:02:27.107 Fetching value of define "__AVX512VL__" : (undefined) 00:02:27.107 Fetching value of define "__PCLMUL__" : 1 00:02:27.107 Fetching value of define "__RDRND__" : 1 00:02:27.107 Fetching value of define "__RDSEED__" : 1 00:02:27.107 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:27.107 Fetching value of define "__znver1__" : (undefined) 00:02:27.107 Fetching value of define "__znver2__" : (undefined) 00:02:27.107 Fetching value of define "__znver3__" : (undefined) 00:02:27.107 Fetching value of define "__znver4__" : (undefined) 00:02:27.107 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:27.107 Message: lib/log: Defining dependency "log" 00:02:27.107 Message: lib/kvargs: Defining dependency "kvargs" 00:02:27.107 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:27.107 Checking for function "getentropy" : NO 00:02:27.107 Message: lib/eal: Defining dependency "eal" 00:02:27.107 Message: lib/ring: Defining dependency "ring" 00:02:27.107 Message: lib/rcu: Defining dependency "rcu" 00:02:27.107 Message: lib/mempool: Defining dependency "mempool" 00:02:27.107 Message: lib/mbuf: Defining dependency "mbuf" 00:02:27.107 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:27.107 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.107 Compiler for C supports arguments -mpclmul: YES 00:02:27.107 Compiler for C supports arguments -maes: YES 00:02:27.107 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.107 Compiler for C supports arguments -mavx512bw: YES 00:02:27.107 Compiler for C supports arguments -mavx512dq: YES 00:02:27.107 Compiler for C supports arguments -mavx512vl: YES 00:02:27.107 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:27.107 Compiler for C supports arguments -mavx2: YES 00:02:27.107 Compiler for C supports arguments -mavx: YES 00:02:27.107 Message: lib/net: Defining dependency "net" 00:02:27.107 Message: lib/meter: Defining dependency "meter" 00:02:27.107 Message: lib/ethdev: Defining dependency "ethdev" 00:02:27.107 Message: lib/pci: Defining dependency "pci" 00:02:27.107 Message: lib/cmdline: Defining dependency "cmdline" 00:02:27.107 Message: lib/metrics: Defining dependency "metrics" 00:02:27.107 Message: lib/hash: Defining dependency "hash" 00:02:27.107 Message: lib/timer: Defining dependency "timer" 00:02:27.107 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.107 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:27.107 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:27.107 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:27.107 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:27.107 Message: lib/acl: Defining dependency "acl" 00:02:27.107 Message: lib/bbdev: Defining dependency "bbdev" 00:02:27.107 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:27.107 Run-time dependency libelf found: YES 0.191 00:02:27.107 Message: lib/bpf: Defining dependency "bpf" 00:02:27.107 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:27.107 Message: lib/compressdev: Defining dependency "compressdev" 00:02:27.107 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:27.107 Message: lib/distributor: Defining dependency "distributor" 00:02:27.107 Message: lib/dmadev: Defining dependency "dmadev" 00:02:27.107 Message: lib/efd: Defining dependency "efd" 00:02:27.107 Message: lib/eventdev: Defining dependency "eventdev" 00:02:27.107 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:27.107 Message: lib/gpudev: Defining dependency "gpudev" 00:02:27.107 Message: lib/gro: Defining dependency "gro" 00:02:27.107 Message: lib/gso: Defining dependency "gso" 00:02:27.107 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:27.107 Message: lib/jobstats: Defining dependency "jobstats" 00:02:27.107 Message: lib/latencystats: Defining dependency "latencystats" 00:02:27.107 Message: lib/lpm: Defining dependency "lpm" 00:02:27.107 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.107 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:27.107 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:27.107 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:27.107 Message: lib/member: Defining dependency "member" 00:02:27.107 Message: lib/pcapng: Defining dependency "pcapng" 00:02:27.107 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:27.107 Message: lib/power: Defining dependency "power" 00:02:27.107 Message: lib/rawdev: Defining dependency "rawdev" 00:02:27.107 Message: lib/regexdev: Defining dependency "regexdev" 00:02:27.107 Message: lib/mldev: Defining dependency "mldev" 00:02:27.107 Message: lib/rib: Defining dependency "rib" 00:02:27.107 Message: lib/reorder: Defining dependency "reorder" 00:02:27.107 Message: lib/sched: Defining dependency "sched" 00:02:27.107 Message: lib/security: Defining dependency "security" 00:02:27.107 Message: lib/stack: Defining dependency "stack" 00:02:27.107 Has header "linux/userfaultfd.h" : YES 00:02:27.107 Has header "linux/vduse.h" : YES 00:02:27.107 Message: lib/vhost: Defining dependency "vhost" 00:02:27.107 Message: lib/ipsec: Defining dependency "ipsec" 00:02:27.107 Message: lib/pdcp: Defining dependency "pdcp" 00:02:27.107 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:27.107 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:27.107 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:27.107 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:27.107 Message: lib/fib: Defining dependency "fib" 00:02:27.107 Message: lib/port: Defining dependency "port" 00:02:27.107 Message: lib/pdump: Defining dependency "pdump" 00:02:27.107 Message: lib/table: Defining dependency "table" 00:02:27.107 Message: lib/pipeline: Defining dependency "pipeline" 00:02:27.107 Message: lib/graph: Defining dependency "graph" 00:02:27.107 Message: lib/node: Defining dependency "node" 00:02:27.107 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.010 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.010 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.010 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.010 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:29.010 Compiler for C supports arguments -Wno-unused-value: YES 00:02:29.010 Compiler for C supports arguments -Wno-format: YES 00:02:29.010 Compiler for C supports arguments -Wno-format-security: YES 00:02:29.010 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:29.010 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:29.010 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:29.010 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:29.010 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:29.010 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.010 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:29.010 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:29.010 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:29.010 Has header "sys/epoll.h" : YES 00:02:29.010 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:29.010 Configuring doxy-api-html.conf using configuration 00:02:29.010 Configuring doxy-api-man.conf using configuration 00:02:29.010 Program mandb found: YES (/usr/bin/mandb) 00:02:29.010 Program sphinx-build found: NO 00:02:29.010 Configuring rte_build_config.h using configuration 00:02:29.010 Message: 00:02:29.010 ================= 00:02:29.010 Applications Enabled 00:02:29.010 ================= 
00:02:29.010 00:02:29.010 apps: 00:02:29.010 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:29.010 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:29.010 test-pmd, test-regex, test-sad, test-security-perf, 00:02:29.010 00:02:29.010 Message: 00:02:29.010 ================= 00:02:29.010 Libraries Enabled 00:02:29.010 ================= 00:02:29.010 00:02:29.010 libs: 00:02:29.010 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:29.010 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:29.010 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:29.010 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:29.010 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:29.010 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:29.010 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:29.010 00:02:29.010 00:02:29.010 Message: 00:02:29.010 =============== 00:02:29.010 Drivers Enabled 00:02:29.010 =============== 00:02:29.010 00:02:29.010 common: 00:02:29.010 00:02:29.010 bus: 00:02:29.010 pci, vdev, 00:02:29.010 mempool: 00:02:29.010 ring, 00:02:29.010 dma: 00:02:29.010 00:02:29.010 net: 00:02:29.010 i40e, 00:02:29.010 raw: 00:02:29.010 00:02:29.010 crypto: 00:02:29.010 00:02:29.010 compress: 00:02:29.010 00:02:29.010 regex: 00:02:29.010 00:02:29.010 ml: 00:02:29.010 00:02:29.010 vdpa: 00:02:29.010 00:02:29.010 event: 00:02:29.010 00:02:29.010 baseband: 00:02:29.010 00:02:29.010 gpu: 00:02:29.010 00:02:29.010 00:02:29.010 Message: 00:02:29.010 ================= 00:02:29.010 Content Skipped 00:02:29.010 ================= 00:02:29.010 00:02:29.010 apps: 00:02:29.010 00:02:29.010 libs: 00:02:29.010 00:02:29.010 drivers: 00:02:29.010 common/cpt: not in enabled drivers build config 00:02:29.010 common/dpaax: not in enabled drivers build config 00:02:29.010 common/iavf: not in enabled drivers build config 00:02:29.010 common/idpf: not in enabled drivers build config 00:02:29.010 common/mvep: not in enabled drivers build config 00:02:29.010 common/octeontx: not in enabled drivers build config 00:02:29.010 bus/auxiliary: not in enabled drivers build config 00:02:29.010 bus/cdx: not in enabled drivers build config 00:02:29.010 bus/dpaa: not in enabled drivers build config 00:02:29.010 bus/fslmc: not in enabled drivers build config 00:02:29.010 bus/ifpga: not in enabled drivers build config 00:02:29.010 bus/platform: not in enabled drivers build config 00:02:29.010 bus/vmbus: not in enabled drivers build config 00:02:29.010 common/cnxk: not in enabled drivers build config 00:02:29.010 common/mlx5: not in enabled drivers build config 00:02:29.010 common/nfp: not in enabled drivers build config 00:02:29.010 common/qat: not in enabled drivers build config 00:02:29.010 common/sfc_efx: not in enabled drivers build config 00:02:29.010 mempool/bucket: not in enabled drivers build config 00:02:29.010 mempool/cnxk: not in enabled drivers build config 00:02:29.010 mempool/dpaa: not in enabled drivers build config 00:02:29.010 mempool/dpaa2: not in enabled drivers build config 00:02:29.010 mempool/octeontx: not in enabled drivers build config 00:02:29.011 mempool/stack: not in enabled drivers build config 00:02:29.011 dma/cnxk: not in enabled drivers build config 00:02:29.011 dma/dpaa: not in enabled drivers build config 00:02:29.011 dma/dpaa2: not in enabled drivers build config 00:02:29.011 
dma/hisilicon: not in enabled drivers build config 00:02:29.011 dma/idxd: not in enabled drivers build config 00:02:29.011 dma/ioat: not in enabled drivers build config 00:02:29.011 dma/skeleton: not in enabled drivers build config 00:02:29.011 net/af_packet: not in enabled drivers build config 00:02:29.011 net/af_xdp: not in enabled drivers build config 00:02:29.011 net/ark: not in enabled drivers build config 00:02:29.011 net/atlantic: not in enabled drivers build config 00:02:29.011 net/avp: not in enabled drivers build config 00:02:29.011 net/axgbe: not in enabled drivers build config 00:02:29.011 net/bnx2x: not in enabled drivers build config 00:02:29.011 net/bnxt: not in enabled drivers build config 00:02:29.011 net/bonding: not in enabled drivers build config 00:02:29.011 net/cnxk: not in enabled drivers build config 00:02:29.011 net/cpfl: not in enabled drivers build config 00:02:29.011 net/cxgbe: not in enabled drivers build config 00:02:29.011 net/dpaa: not in enabled drivers build config 00:02:29.011 net/dpaa2: not in enabled drivers build config 00:02:29.011 net/e1000: not in enabled drivers build config 00:02:29.011 net/ena: not in enabled drivers build config 00:02:29.011 net/enetc: not in enabled drivers build config 00:02:29.011 net/enetfec: not in enabled drivers build config 00:02:29.011 net/enic: not in enabled drivers build config 00:02:29.011 net/failsafe: not in enabled drivers build config 00:02:29.011 net/fm10k: not in enabled drivers build config 00:02:29.011 net/gve: not in enabled drivers build config 00:02:29.011 net/hinic: not in enabled drivers build config 00:02:29.011 net/hns3: not in enabled drivers build config 00:02:29.011 net/iavf: not in enabled drivers build config 00:02:29.011 net/ice: not in enabled drivers build config 00:02:29.011 net/idpf: not in enabled drivers build config 00:02:29.011 net/igc: not in enabled drivers build config 00:02:29.011 net/ionic: not in enabled drivers build config 00:02:29.011 net/ipn3ke: not in enabled drivers build config 00:02:29.011 net/ixgbe: not in enabled drivers build config 00:02:29.011 net/mana: not in enabled drivers build config 00:02:29.011 net/memif: not in enabled drivers build config 00:02:29.011 net/mlx4: not in enabled drivers build config 00:02:29.011 net/mlx5: not in enabled drivers build config 00:02:29.011 net/mvneta: not in enabled drivers build config 00:02:29.011 net/mvpp2: not in enabled drivers build config 00:02:29.011 net/netvsc: not in enabled drivers build config 00:02:29.011 net/nfb: not in enabled drivers build config 00:02:29.011 net/nfp: not in enabled drivers build config 00:02:29.011 net/ngbe: not in enabled drivers build config 00:02:29.011 net/null: not in enabled drivers build config 00:02:29.011 net/octeontx: not in enabled drivers build config 00:02:29.011 net/octeon_ep: not in enabled drivers build config 00:02:29.011 net/pcap: not in enabled drivers build config 00:02:29.011 net/pfe: not in enabled drivers build config 00:02:29.011 net/qede: not in enabled drivers build config 00:02:29.011 net/ring: not in enabled drivers build config 00:02:29.011 net/sfc: not in enabled drivers build config 00:02:29.011 net/softnic: not in enabled drivers build config 00:02:29.011 net/tap: not in enabled drivers build config 00:02:29.011 net/thunderx: not in enabled drivers build config 00:02:29.011 net/txgbe: not in enabled drivers build config 00:02:29.011 net/vdev_netvsc: not in enabled drivers build config 00:02:29.011 net/vhost: not in enabled drivers build config 00:02:29.011 net/virtio: 
not in enabled drivers build config 00:02:29.011 net/vmxnet3: not in enabled drivers build config 00:02:29.011 raw/cnxk_bphy: not in enabled drivers build config 00:02:29.011 raw/cnxk_gpio: not in enabled drivers build config 00:02:29.011 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:29.011 raw/ifpga: not in enabled drivers build config 00:02:29.011 raw/ntb: not in enabled drivers build config 00:02:29.011 raw/skeleton: not in enabled drivers build config 00:02:29.011 crypto/armv8: not in enabled drivers build config 00:02:29.011 crypto/bcmfs: not in enabled drivers build config 00:02:29.011 crypto/caam_jr: not in enabled drivers build config 00:02:29.011 crypto/ccp: not in enabled drivers build config 00:02:29.011 crypto/cnxk: not in enabled drivers build config 00:02:29.011 crypto/dpaa_sec: not in enabled drivers build config 00:02:29.011 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.011 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.011 crypto/mlx5: not in enabled drivers build config 00:02:29.011 crypto/mvsam: not in enabled drivers build config 00:02:29.011 crypto/nitrox: not in enabled drivers build config 00:02:29.011 crypto/null: not in enabled drivers build config 00:02:29.011 crypto/octeontx: not in enabled drivers build config 00:02:29.011 crypto/openssl: not in enabled drivers build config 00:02:29.011 crypto/scheduler: not in enabled drivers build config 00:02:29.011 crypto/uadk: not in enabled drivers build config 00:02:29.011 crypto/virtio: not in enabled drivers build config 00:02:29.011 compress/isal: not in enabled drivers build config 00:02:29.011 compress/mlx5: not in enabled drivers build config 00:02:29.011 compress/octeontx: not in enabled drivers build config 00:02:29.011 compress/zlib: not in enabled drivers build config 00:02:29.011 regex/mlx5: not in enabled drivers build config 00:02:29.011 regex/cn9k: not in enabled drivers build config 00:02:29.011 ml/cnxk: not in enabled drivers build config 00:02:29.011 vdpa/ifc: not in enabled drivers build config 00:02:29.011 vdpa/mlx5: not in enabled drivers build config 00:02:29.011 vdpa/nfp: not in enabled drivers build config 00:02:29.011 vdpa/sfc: not in enabled drivers build config 00:02:29.011 event/cnxk: not in enabled drivers build config 00:02:29.011 event/dlb2: not in enabled drivers build config 00:02:29.011 event/dpaa: not in enabled drivers build config 00:02:29.011 event/dpaa2: not in enabled drivers build config 00:02:29.011 event/dsw: not in enabled drivers build config 00:02:29.011 event/opdl: not in enabled drivers build config 00:02:29.011 event/skeleton: not in enabled drivers build config 00:02:29.011 event/sw: not in enabled drivers build config 00:02:29.011 event/octeontx: not in enabled drivers build config 00:02:29.011 baseband/acc: not in enabled drivers build config 00:02:29.011 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:29.011 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:29.011 baseband/la12xx: not in enabled drivers build config 00:02:29.011 baseband/null: not in enabled drivers build config 00:02:29.011 baseband/turbo_sw: not in enabled drivers build config 00:02:29.011 gpu/cuda: not in enabled drivers build config 00:02:29.011 00:02:29.011 00:02:29.011 Build targets in project: 220 00:02:29.011 00:02:29.011 DPDK 23.11.0 00:02:29.011 00:02:29.011 User defined options 00:02:29.011 libdir : lib 00:02:29.011 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:29.011 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:29.011 c_link_args : 00:02:29.011 enable_docs : false 00:02:29.011 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:29.011 enable_kmods : false 00:02:29.011 machine : native 00:02:29.011 tests : false 00:02:29.011 00:02:29.011 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.011 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:29.270 07:53:40 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:29.270 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:29.270 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.270 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:29.270 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:29.270 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.270 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.270 [6/710] Linking static target lib/librte_kvargs.a 00:02:29.270 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.530 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.530 [9/710] Linking static target lib/librte_log.a 00:02:29.530 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.530 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.789 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:29.789 [13/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.789 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:29.789 [15/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:29.789 [16/710] Linking target lib/librte_log.so.24.0 00:02:30.049 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.049 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.049 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.308 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.308 [21/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:30.308 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:30.308 [23/710] Linking target lib/librte_kvargs.so.24.0 00:02:30.308 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.308 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:30.567 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.567 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.567 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.567 [29/710] Linking static target lib/librte_telemetry.a 00:02:30.567 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.567 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:30.826 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.826 [33/710] Compiling C object 
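For anyone trying to reproduce this DPDK configuration outside the CI job, the "User defined options" summary above maps onto a meson invocation roughly like the sketch below. This is a hedged reconstruction from the logged values only (libdir, prefix, c_args, enable_drivers, enable_kmods, machine, tests); the actual autobuild wrapper may pass additional flags that do not appear in this log, and the deprecation WARNING above suggests the CI script still calls `meson [options]` rather than `meson setup [options]`.

# Sketch only: reconstructed from the logged "User defined options"; source and
# build directory paths are taken from the log, everything else is an assumption.
meson setup /home/vagrant/spdk_repo/dpdk/build-tmp /home/vagrant/spdk_repo/dpdk \
    --libdir lib \
    --prefix /home/vagrant/spdk_repo/dpdk/build \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
# Then build with the same ninja call the log shows (10 parallel jobs):
ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10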
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:30.826 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.826 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.085 [36/710] Linking target lib/librte_telemetry.so.24.0 00:02:31.085 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.085 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.085 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.085 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.085 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.085 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.085 [43/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:31.085 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:31.345 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:31.345 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.605 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:31.605 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:31.605 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:31.605 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:31.605 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:31.864 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:31.864 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:31.864 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:31.864 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:32.124 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:32.124 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:32.124 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:32.124 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:32.124 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:32.124 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:32.124 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:32.124 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:32.383 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:32.383 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:32.383 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:32.383 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.642 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:32.642 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:32.901 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:32.901 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.901 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:02:32.901 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:32.901 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:32.901 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:32.901 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:32.901 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:32.901 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:33.160 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:33.160 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:33.419 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:33.419 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:33.419 [83/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:33.419 [84/710] Linking static target lib/librte_ring.a 00:02:33.419 [85/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:33.680 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:33.680 [87/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:33.680 [88/710] Linking static target lib/librte_eal.a 00:02:33.680 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.941 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:33.941 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:33.941 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.941 [93/710] Linking static target lib/librte_mempool.a 00:02:34.200 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:34.200 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:34.200 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:34.200 [97/710] Linking static target lib/librte_rcu.a 00:02:34.200 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:34.200 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:34.458 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.458 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:34.458 [102/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.458 [103/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:34.716 [104/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:34.716 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:34.716 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:34.716 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:34.716 [108/710] Linking static target lib/librte_mbuf.a 00:02:34.974 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.974 [110/710] Linking static target lib/librte_net.a 00:02:34.974 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.974 [112/710] Linking static target lib/librte_meter.a 00:02:35.232 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.232 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.232 [115/710] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.232 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.232 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.489 [118/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.489 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:36.055 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:36.055 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:36.312 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:36.312 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:36.312 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:36.312 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:36.312 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:36.312 [127/710] Linking static target lib/librte_pci.a 00:02:36.312 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:36.570 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:36.570 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.570 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:36.570 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:36.570 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:36.828 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:36.828 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:36.828 [136/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:36.828 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:36.828 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:36.828 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:36.828 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:37.085 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:37.085 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:37.085 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:37.085 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:37.085 [145/710] Linking static target lib/librte_cmdline.a 00:02:37.343 [146/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:37.601 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:37.601 [148/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:37.601 [149/710] Linking static target lib/librte_metrics.a 00:02:37.601 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:37.859 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.116 [152/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:38.116 [153/710] Linking static target lib/librte_timer.a 00:02:38.116 [154/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:38.116 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:38.374 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.631 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:38.890 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:38.890 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:38.890 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:39.456 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:39.456 [162/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:39.456 [163/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:39.456 [164/710] Linking static target lib/librte_bitratestats.a 00:02:39.714 [165/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:39.714 [166/710] Linking static target lib/librte_ethdev.a 00:02:39.714 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.714 [168/710] Linking target lib/librte_eal.so.24.0 00:02:39.714 [169/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.714 [170/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:39.714 [171/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:39.714 [172/710] Linking static target lib/librte_bbdev.a 00:02:39.714 [173/710] Linking static target lib/librte_hash.a 00:02:39.972 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:39.972 [175/710] Linking target lib/librte_ring.so.24.0 00:02:39.972 [176/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:39.972 [177/710] Linking target lib/librte_rcu.so.24.0 00:02:39.972 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:40.231 [179/710] Linking target lib/librte_mempool.so.24.0 00:02:40.231 [180/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:40.231 [181/710] Linking target lib/librte_meter.so.24.0 00:02:40.231 [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:40.231 [183/710] Linking target lib/librte_pci.so.24.0 00:02:40.231 [184/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:40.231 [185/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:40.231 [186/710] Linking target lib/librte_mbuf.so.24.0 00:02:40.231 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:40.231 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:02:40.489 [189/710] Linking target lib/librte_timer.so.24.0 00:02:40.489 [190/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:40.489 [191/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.489 [192/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.489 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:40.489 [194/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:40.489 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:40.489 [196/710] Linking target lib/librte_net.so.24.0 00:02:40.489 [197/710] Linking target lib/librte_bbdev.so.24.0 
00:02:40.489 [198/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:40.489 [199/710] Linking static target lib/acl/libavx512_tmp.a 00:02:40.748 [200/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:40.748 [201/710] Linking target lib/librte_cmdline.so.24.0 00:02:40.748 [202/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:40.748 [203/710] Linking target lib/librte_hash.so.24.0 00:02:40.748 [204/710] Linking static target lib/librte_acl.a 00:02:40.748 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:41.007 [206/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:41.007 [207/710] Linking static target lib/librte_cfgfile.a 00:02:41.007 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:41.007 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.007 [210/710] Linking target lib/librte_acl.so.24.0 00:02:41.007 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:41.266 [212/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:41.266 [213/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.266 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:02:41.266 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:41.266 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:41.525 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.525 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:41.783 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:41.783 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:41.783 [221/710] Linking static target lib/librte_bpf.a 00:02:41.784 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:41.784 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:42.042 [224/710] Linking static target lib/librte_compressdev.a 00:02:42.042 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:42.042 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.302 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:42.302 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:42.302 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.302 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:42.302 [231/710] Linking static target lib/librte_distributor.a 00:02:42.302 [232/710] Linking target lib/librte_compressdev.so.24.0 00:02:42.561 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:42.561 [234/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.561 [235/710] Linking static target lib/librte_dmadev.a 00:02:42.561 [236/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.820 [237/710] Linking target lib/librte_distributor.so.24.0 00:02:42.820 [238/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 
00:02:43.079 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.079 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:43.079 [241/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:43.337 [242/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:43.338 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:43.596 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:43.596 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:43.596 [246/710] Linking static target lib/librte_efd.a 00:02:43.855 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.855 [248/710] Linking static target lib/librte_cryptodev.a 00:02:43.855 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:43.855 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.855 [251/710] Linking target lib/librte_efd.so.24.0 00:02:44.113 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.371 [253/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:44.371 [254/710] Linking static target lib/librte_dispatcher.a 00:02:44.371 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:44.371 [256/710] Linking target lib/librte_ethdev.so.24.0 00:02:44.371 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:44.371 [258/710] Linking target lib/librte_metrics.so.24.0 00:02:44.630 [259/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:44.630 [260/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:44.630 [261/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:44.630 [262/710] Linking target lib/librte_bitratestats.so.24.0 00:02:44.630 [263/710] Linking static target lib/librte_gpudev.a 00:02:44.630 [264/710] Linking target lib/librte_bpf.so.24.0 00:02:44.630 [265/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.630 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:44.888 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:44.888 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:45.147 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.147 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:45.147 [271/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:45.147 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:45.147 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:45.406 [274/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:45.406 [275/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.406 [276/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:45.406 [277/710] Linking target lib/librte_gpudev.so.24.0 00:02:45.406 [278/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:45.406 [279/710] Linking 
static target lib/librte_eventdev.a 00:02:45.664 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:45.664 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:45.664 [282/710] Linking static target lib/librte_gro.a 00:02:45.664 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:45.664 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:45.664 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:45.923 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.923 [287/710] Linking target lib/librte_gro.so.24.0 00:02:45.923 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:45.923 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:46.182 [290/710] Linking static target lib/librte_gso.a 00:02:46.182 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:46.182 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.182 [293/710] Linking target lib/librte_gso.so.24.0 00:02:46.440 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:46.440 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:46.440 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:46.440 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:46.440 [298/710] Linking static target lib/librte_jobstats.a 00:02:46.440 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:46.701 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:46.701 [301/710] Linking static target lib/librte_ip_frag.a 00:02:46.701 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:46.701 [303/710] Linking static target lib/librte_latencystats.a 00:02:46.701 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.960 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:46.960 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.960 [307/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.960 [308/710] Linking target lib/librte_latencystats.so.24.0 00:02:46.960 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:46.960 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:46.960 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:46.960 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:47.218 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:47.218 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:47.218 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:47.218 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:47.218 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:47.479 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.479 [319/710] Linking target lib/librte_eventdev.so.24.0 00:02:47.479 [320/710] Generating symbol file 
lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:47.764 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:47.764 [322/710] Linking static target lib/librte_lpm.a 00:02:47.764 [323/710] Linking target lib/librte_dispatcher.so.24.0 00:02:47.764 [324/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:47.764 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:48.037 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:48.037 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:48.037 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:48.037 [329/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.037 [330/710] Linking static target lib/librte_pcapng.a 00:02:48.037 [331/710] Linking target lib/librte_lpm.so.24.0 00:02:48.037 [332/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:48.037 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:48.037 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:48.296 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.296 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:48.296 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:48.296 [338/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:48.296 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:48.555 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:48.555 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:48.813 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:48.813 [343/710] Linking static target lib/librte_power.a 00:02:48.813 [344/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:48.813 [345/710] Linking static target lib/librte_rawdev.a 00:02:48.813 [346/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:48.813 [347/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:48.813 [348/710] Linking static target lib/librte_regexdev.a 00:02:48.813 [349/710] Linking static target lib/librte_member.a 00:02:48.813 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:48.813 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:49.071 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:49.071 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.071 [354/710] Linking target lib/librte_member.so.24.0 00:02:49.071 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:49.071 [356/710] Linking static target lib/librte_mldev.a 00:02:49.330 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.330 [358/710] Linking target lib/librte_rawdev.so.24.0 00:02:49.330 [359/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:49.330 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.330 [361/710] Linking target lib/librte_power.so.24.0 00:02:49.330 
[362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:49.589 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.589 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:49.589 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:49.848 [366/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:49.848 [367/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.848 [368/710] Linking static target lib/librte_rib.a 00:02:49.848 [369/710] Linking static target lib/librte_reorder.a 00:02:49.848 [370/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:49.848 [371/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:49.848 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:49.848 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:50.108 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.108 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:50.108 [376/710] Linking static target lib/librte_stack.a 00:02:50.108 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:50.108 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.108 [379/710] Linking static target lib/librte_security.a 00:02:50.108 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.366 [381/710] Linking target lib/librte_rib.so.24.0 00:02:50.367 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:50.367 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.367 [384/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.367 [385/710] Linking target lib/librte_stack.so.24.0 00:02:50.367 [386/710] Linking target lib/librte_mldev.so.24.0 00:02:50.367 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:50.625 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.625 [389/710] Linking target lib/librte_security.so.24.0 00:02:50.625 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:50.625 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:50.625 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:50.883 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:50.883 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:50.883 [395/710] Linking static target lib/librte_sched.a 00:02:51.142 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.142 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.401 [398/710] Linking target lib/librte_sched.so.24.0 00:02:51.401 [399/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:51.401 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.659 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.659 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:51.659 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:51.919 [404/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.178 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:52.178 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:52.437 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:52.437 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:52.437 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:52.695 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:52.695 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:52.695 [412/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:52.695 [413/710] Linking static target lib/librte_ipsec.a 00:02:52.954 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:52.954 [415/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.954 [416/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:52.954 [417/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:52.954 [418/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:52.954 [419/710] Linking target lib/librte_ipsec.so.24.0 00:02:53.213 [420/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:53.213 [421/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:53.213 [422/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:53.213 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:54.150 [424/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:54.150 [425/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:54.150 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:54.150 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:54.150 [428/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:54.150 [429/710] Linking static target lib/librte_pdcp.a 00:02:54.150 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:54.150 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:54.150 [432/710] Linking static target lib/librte_fib.a 00:02:54.412 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.412 [434/710] Linking target lib/librte_pdcp.so.24.0 00:02:54.412 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.670 [436/710] Linking target lib/librte_fib.so.24.0 00:02:54.670 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:55.237 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:55.237 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:55.237 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:55.237 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:55.237 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:55.496 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:55.496 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:55.755 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:55.755 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:02:55.755 [447/710] Linking static target lib/librte_port.a 00:02:56.014 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:56.014 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:56.014 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:56.014 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:56.273 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.273 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:56.273 [454/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.273 [455/710] Linking target lib/librte_port.so.24.0 00:02:56.273 [456/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:56.273 [457/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:56.273 [458/710] Linking static target lib/librte_pdump.a 00:02:56.532 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:56.532 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.532 [461/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:56.532 [462/710] Linking target lib/librte_pdump.so.24.0 00:02:57.100 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:57.100 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:57.100 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:57.100 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:57.100 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:57.358 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:57.615 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:57.615 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:57.615 [471/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:57.615 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:57.615 [473/710] Linking static target lib/librte_table.a 00:02:58.181 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.181 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:58.439 [476/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:58.439 [477/710] Linking target lib/librte_table.so.24.0 00:02:58.439 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:58.439 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:58.697 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:59.008 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:59.008 [482/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:59.266 [483/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:59.266 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:59.266 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:59.266 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:59.524 [487/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:59.783 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:59.783 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:00.041 [490/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:00.041 [491/710] Linking static target lib/librte_graph.a 00:03:00.041 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:00.041 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:00.607 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:00.607 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:00.607 [496/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.607 [497/710] Linking target lib/librte_graph.so.24.0 00:03:00.607 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:00.607 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:00.865 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:01.138 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:01.138 [502/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:01.138 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:01.138 [504/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:01.138 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:01.397 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:01.657 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:01.657 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:01.920 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:01.920 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:01.920 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:01.920 [512/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:01.920 [513/710] Linking static target lib/librte_node.a 00:03:01.920 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.179 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.179 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.438 [517/710] Linking target lib/librte_node.so.24.0 00:03:02.438 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.438 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:02.438 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.438 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:02.697 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:02.697 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.697 [524/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:02.697 [525/710] Linking static target drivers/librte_bus_vdev.a 00:03:02.697 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.697 [527/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:02.697 [528/710] 
Linking static target drivers/librte_bus_pci.a 00:03:02.956 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:02.956 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:02.956 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:02.956 [532/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.956 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:02.956 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:03.215 [535/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:03.215 [536/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:03.215 [537/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:03.215 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.215 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:03.473 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.473 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.473 [542/710] Linking static target drivers/librte_mempool_ring.a 00:03:03.473 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.473 [544/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:03.473 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:03.732 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:03.990 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:04.247 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:04.247 [549/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:04.247 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:04.247 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:05.181 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:05.181 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:05.181 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:05.181 [555/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:05.181 [556/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:05.440 [557/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:05.699 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:05.958 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:05.958 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:06.217 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:06.217 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:06.785 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:06.785 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:06.785 [565/710] Compiling C object 
app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:06.785 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:07.351 [567/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.352 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:07.352 [569/710] Linking static target lib/librte_vhost.a 00:03:07.352 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:07.610 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:07.610 [572/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:07.611 [573/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:07.611 [574/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:07.611 [575/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:07.870 [576/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:08.129 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:08.129 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:08.129 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:08.388 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:08.388 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:08.388 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:08.388 [583/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.388 [584/710] Linking target lib/librte_vhost.so.24.0 00:03:08.388 [585/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:08.646 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:08.646 [587/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:08.646 [588/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.646 [589/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:08.646 [590/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:08.646 [591/710] Linking static target drivers/librte_net_i40e.a 00:03:08.904 [592/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:08.904 [593/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:08.904 [594/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:09.472 [595/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:09.472 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.472 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:09.472 [598/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:09.472 [599/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:10.040 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:10.040 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:10.040 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:10.040 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:10.040 [604/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:10.299 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:10.299 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:10.559 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:10.818 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:10.818 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:11.077 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:11.077 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:11.077 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:11.077 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:11.335 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:11.335 [615/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:11.335 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:11.335 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:11.594 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:11.853 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:11.853 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:12.112 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:12.112 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:12.373 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:12.373 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:12.373 [625/710] Linking static target lib/librte_pipeline.a 00:03:12.938 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:12.938 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:13.197 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:13.197 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:13.197 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:13.455 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:13.455 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:13.455 [633/710] Linking target app/dpdk-dumpcap 00:03:13.455 [634/710] Linking target app/dpdk-graph 00:03:13.715 [635/710] Linking target app/dpdk-pdump 00:03:13.715 [636/710] Linking target app/dpdk-proc-info 00:03:13.715 [637/710] Linking target app/dpdk-test-acl 00:03:13.715 [638/710] Linking target app/dpdk-test-cmdline 00:03:13.715 [639/710] Linking target app/dpdk-test-compress-perf 00:03:13.974 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:13.974 [641/710] Linking target app/dpdk-test-dma-perf 00:03:13.974 [642/710] Linking target app/dpdk-test-fib 00:03:14.232 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:14.232 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:14.490 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:14.490 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:14.490 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:14.490 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:14.490 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:14.748 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:15.007 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:15.007 [652/710] Linking target app/dpdk-test-gpudev 00:03:15.007 [653/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:15.007 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:15.007 [655/710] Linking target app/dpdk-test-eventdev 00:03:15.265 [656/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.265 [657/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:15.265 [658/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:15.265 [659/710] Linking target lib/librte_pipeline.so.24.0 00:03:15.523 [660/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:15.523 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:15.523 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:15.523 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:15.782 [664/710] Linking target app/dpdk-test-flow-perf 00:03:15.782 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:15.782 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:15.782 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:16.040 [668/710] Linking target app/dpdk-test-bbdev 00:03:16.040 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:16.300 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:16.300 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:16.300 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:16.300 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:16.558 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:16.558 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:16.817 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:16.817 [677/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:17.074 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:17.332 [679/710] Linking target app/dpdk-test-mldev 00:03:17.332 [680/710] Linking target app/dpdk-test-pipeline 00:03:17.332 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:17.332 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:17.332 [683/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:17.897 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:17.897 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:17.897 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:18.156 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:18.156 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:18.413 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:18.413 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:18.671 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:18.671 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:18.671 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:19.236 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:19.236 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:19.499 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:19.499 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:19.771 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:19.771 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:20.063 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:20.064 [701/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:20.064 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:20.064 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:20.064 [704/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:20.064 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:20.324 [706/710] Linking target app/dpdk-test-sad 00:03:20.324 [707/710] Linking target app/dpdk-test-regex 00:03:20.582 [708/710] Linking target app/dpdk-testpmd 00:03:20.582 [709/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:21.148 [710/710] Linking target app/dpdk-test-security-perf 00:03:21.148 07:54:32 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:21.148 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:21.148 [0/1] Installing files. 
00:03:21.409 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:21.409 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.410 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.410 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:21.411 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:21.411 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.412 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.413 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.414 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:21.414 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:21.414 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.414 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.672 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:21.673 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:21.673 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.673 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.934 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.934 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.934 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.934 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.934 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.934 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.934 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.934 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.934 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:21.934 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:21.934 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.934 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.934 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.934 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.934 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.934 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.934 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.935 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.936 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:21.936 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:21.936 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:21.936 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:21.936 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:21.936 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:21.936 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:21.936 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:21.936 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:21.936 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:21.936 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:21.936 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:21.936 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:21.936 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:21.936 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:21.936 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:21.937 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:21.937 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:21.937 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:21.937 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:21.937 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:21.937 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:21.937 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:21.937 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:21.937 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:21.937 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:21.937 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:21.937 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:21.937 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:21.937 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:21.937 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:21.937 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:21.937 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:21.937 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:21.937 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:21.937 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:21.937 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:21.937 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:21.937 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:21.937 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:21.937 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:21.937 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:21.937 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:21.937 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:21.937 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:21.937 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:21.937 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:21.937 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:21.937 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:21.937 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:21.937 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:21.937 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:21.937 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:21.937 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:21.937 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:21.937 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:21.937 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:21.937 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:21.937 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:21.937 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:21.937 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:21.937 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:21.937 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:21.937 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:21.937 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:21.937 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:21.937 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:21.937 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:21.937 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:21.937 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:21.937 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:21.937 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:21.937 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:21.937 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:21.937 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:21.937 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:21.937 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:21.937 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:21.937 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:21.937 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:21.937 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:21.937 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:21.937 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:21.937 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:21.937 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:21.937 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:21.937 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:21.937 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:21.937 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:21.937 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:21.937 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:21.937 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:21.937 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:21.937 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:21.937 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:21.937 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:21.937 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:21.937 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:21.937 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:21.937 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:21.937 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:21.937 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:21.937 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:21.937 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:21.937 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:21.937 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:21.937 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:21.937 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:21.937 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:21.937 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:21.937 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:21.937 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:21.937 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:21.937 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:21.937 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:21.937 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:21.937 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:21.937 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:21.937 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:21.937 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:21.937 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:21.937 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:21.937 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:21.937 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:21.937 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:21.937 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:21.937 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:21.937 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:21.937 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:21.937 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:21.937 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:21.937 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:21.937 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:21.937 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:22.195 07:54:33 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:22.195 07:54:33 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:22.195 07:54:33 -- common/autobuild_common.sh@203 -- $ cat 00:03:22.195 ************************************ 00:03:22.195 END TEST build_native_dpdk 00:03:22.195 ************************************ 00:03:22.195 07:54:33 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:22.195 00:03:22.195 real 1m0.308s 00:03:22.195 user 7m18.114s 00:03:22.195 sys 1m9.071s 00:03:22.195 07:54:33 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:22.195 07:54:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:22.195 07:54:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:22.195 07:54:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:22.195 07:54:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:22.195 07:54:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:22.195 07:54:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:22.195 07:54:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:22.195 07:54:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:22.195 07:54:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:22.195 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:22.454 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:22.454 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:22.454 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:23.023 Using 'verbs' RDMA provider 00:03:35.800 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:50.680 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:50.680 go version go1.21.1 linux/amd64 00:03:50.680 Creating mk/config.mk...done. 00:03:50.680 Creating mk/cc.flags.mk...done. 00:03:50.680 Type 'make' to build. 00:03:50.680 07:55:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:50.680 07:55:00 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:50.680 07:55:00 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:50.680 07:55:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:50.680 ************************************ 00:03:50.680 START TEST make 00:03:50.680 ************************************ 00:03:50.680 07:55:00 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:50.680 make[1]: Nothing to be done for 'all'. 
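[Editor's note, not part of the captured console output] The block above stages the DPDK headers, libraries, and PMD symlinks into /home/vagrant/spdk_repo/dpdk/build and then points SPDK's configure at that tree with --with-dpdk, which the log confirms by picking up /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs. A minimal, hedged sketch of how such a staged DPDK install can be inspected from a shell before configure consumes it; the pkg-config queries are illustrative and only assume the paths and .pc files already shown in the log:

    DPDK_BUILD=/home/vagrant/spdk_repo/dpdk/build
    # libdpdk.pc and libdpdk-libs.pc were installed into lib/pkgconfig above
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"
    pkg-config --modversion libdpdk   # prints the version of the staged DPDK tree
    pkg-config --cflags libdpdk       # include flags rooted at the staged include/ dir
    pkg-config --libs libdpdk         # link flags for the librte_* libraries symlinked above
    # SPDK is then configured against the same tree, as in the log:
    #   ./configure --with-dpdk="$DPDK_BUILD" --with-shared ...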
00:04:12.614 CC lib/ut_mock/mock.o 00:04:12.614 CC lib/ut/ut.o 00:04:12.614 CC lib/log/log.o 00:04:12.614 CC lib/log/log_flags.o 00:04:12.614 CC lib/log/log_deprecated.o 00:04:12.614 LIB libspdk_ut_mock.a 00:04:12.614 SO libspdk_ut_mock.so.5.0 00:04:12.614 LIB libspdk_ut.a 00:04:12.614 LIB libspdk_log.a 00:04:12.614 SO libspdk_ut.so.1.0 00:04:12.614 SYMLINK libspdk_ut_mock.so 00:04:12.614 SO libspdk_log.so.6.1 00:04:12.614 SYMLINK libspdk_ut.so 00:04:12.614 SYMLINK libspdk_log.so 00:04:12.614 CC lib/dma/dma.o 00:04:12.614 CC lib/util/base64.o 00:04:12.614 CC lib/util/bit_array.o 00:04:12.614 CC lib/util/cpuset.o 00:04:12.614 CC lib/util/crc16.o 00:04:12.614 CC lib/util/crc32.o 00:04:12.614 CC lib/util/crc32c.o 00:04:12.614 CXX lib/trace_parser/trace.o 00:04:12.614 CC lib/ioat/ioat.o 00:04:12.614 CC lib/vfio_user/host/vfio_user_pci.o 00:04:12.614 CC lib/util/crc32_ieee.o 00:04:12.614 CC lib/vfio_user/host/vfio_user.o 00:04:12.614 CC lib/util/crc64.o 00:04:12.614 LIB libspdk_dma.a 00:04:12.614 CC lib/util/dif.o 00:04:12.614 SO libspdk_dma.so.3.0 00:04:12.614 CC lib/util/fd.o 00:04:12.614 SYMLINK libspdk_dma.so 00:04:12.614 CC lib/util/file.o 00:04:12.614 CC lib/util/hexlify.o 00:04:12.614 LIB libspdk_ioat.a 00:04:12.614 CC lib/util/iov.o 00:04:12.614 CC lib/util/math.o 00:04:12.614 SO libspdk_ioat.so.6.0 00:04:12.614 CC lib/util/pipe.o 00:04:12.614 SYMLINK libspdk_ioat.so 00:04:12.614 CC lib/util/strerror_tls.o 00:04:12.614 CC lib/util/string.o 00:04:12.614 CC lib/util/uuid.o 00:04:12.614 LIB libspdk_vfio_user.a 00:04:12.614 CC lib/util/fd_group.o 00:04:12.614 SO libspdk_vfio_user.so.4.0 00:04:12.614 CC lib/util/xor.o 00:04:12.614 CC lib/util/zipf.o 00:04:12.614 SYMLINK libspdk_vfio_user.so 00:04:12.614 LIB libspdk_util.a 00:04:12.614 SO libspdk_util.so.8.0 00:04:12.873 SYMLINK libspdk_util.so 00:04:12.873 LIB libspdk_trace_parser.a 00:04:12.873 CC lib/conf/conf.o 00:04:12.873 CC lib/idxd/idxd.o 00:04:12.873 CC lib/idxd/idxd_kernel.o 00:04:12.873 CC lib/vmd/vmd.o 00:04:12.873 CC lib/idxd/idxd_user.o 00:04:12.873 CC lib/json/json_parse.o 00:04:12.873 CC lib/vmd/led.o 00:04:12.873 CC lib/env_dpdk/env.o 00:04:12.873 CC lib/rdma/common.o 00:04:12.873 SO libspdk_trace_parser.so.4.0 00:04:13.132 SYMLINK libspdk_trace_parser.so 00:04:13.132 CC lib/env_dpdk/memory.o 00:04:13.132 CC lib/env_dpdk/pci.o 00:04:13.132 CC lib/env_dpdk/init.o 00:04:13.132 LIB libspdk_conf.a 00:04:13.132 CC lib/json/json_util.o 00:04:13.132 CC lib/json/json_write.o 00:04:13.132 SO libspdk_conf.so.5.0 00:04:13.132 CC lib/rdma/rdma_verbs.o 00:04:13.132 SYMLINK libspdk_conf.so 00:04:13.132 CC lib/env_dpdk/threads.o 00:04:13.391 CC lib/env_dpdk/pci_ioat.o 00:04:13.391 CC lib/env_dpdk/pci_virtio.o 00:04:13.391 CC lib/env_dpdk/pci_vmd.o 00:04:13.391 LIB libspdk_rdma.a 00:04:13.391 CC lib/env_dpdk/pci_idxd.o 00:04:13.391 SO libspdk_rdma.so.5.0 00:04:13.391 LIB libspdk_json.a 00:04:13.391 SO libspdk_json.so.5.1 00:04:13.392 LIB libspdk_idxd.a 00:04:13.392 SYMLINK libspdk_rdma.so 00:04:13.392 CC lib/env_dpdk/pci_event.o 00:04:13.392 CC lib/env_dpdk/sigbus_handler.o 00:04:13.392 CC lib/env_dpdk/pci_dpdk.o 00:04:13.392 SO libspdk_idxd.so.11.0 00:04:13.651 SYMLINK libspdk_json.so 00:04:13.651 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:13.651 LIB libspdk_vmd.a 00:04:13.651 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:13.651 SO libspdk_vmd.so.5.0 00:04:13.651 SYMLINK libspdk_idxd.so 00:04:13.651 SYMLINK libspdk_vmd.so 00:04:13.651 CC lib/jsonrpc/jsonrpc_server.o 00:04:13.651 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:13.651 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:13.651 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:13.910 LIB libspdk_jsonrpc.a 00:04:13.910 SO libspdk_jsonrpc.so.5.1 00:04:13.910 SYMLINK libspdk_jsonrpc.so 00:04:14.169 CC lib/rpc/rpc.o 00:04:14.169 LIB libspdk_env_dpdk.a 00:04:14.428 SO libspdk_env_dpdk.so.13.0 00:04:14.428 LIB libspdk_rpc.a 00:04:14.428 SO libspdk_rpc.so.5.0 00:04:14.428 SYMLINK libspdk_rpc.so 00:04:14.428 SYMLINK libspdk_env_dpdk.so 00:04:14.428 CC lib/sock/sock.o 00:04:14.428 CC lib/sock/sock_rpc.o 00:04:14.428 CC lib/notify/notify.o 00:04:14.428 CC lib/notify/notify_rpc.o 00:04:14.428 CC lib/trace/trace.o 00:04:14.428 CC lib/trace/trace_flags.o 00:04:14.428 CC lib/trace/trace_rpc.o 00:04:14.686 LIB libspdk_notify.a 00:04:14.686 SO libspdk_notify.so.5.0 00:04:14.686 LIB libspdk_trace.a 00:04:14.686 SYMLINK libspdk_notify.so 00:04:14.957 SO libspdk_trace.so.9.0 00:04:14.957 SYMLINK libspdk_trace.so 00:04:14.957 LIB libspdk_sock.a 00:04:14.957 SO libspdk_sock.so.8.0 00:04:14.957 SYMLINK libspdk_sock.so 00:04:14.957 CC lib/thread/thread.o 00:04:14.957 CC lib/thread/iobuf.o 00:04:15.251 CC lib/nvme/nvme_ctrlr.o 00:04:15.251 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:15.251 CC lib/nvme/nvme_fabric.o 00:04:15.251 CC lib/nvme/nvme_ns.o 00:04:15.251 CC lib/nvme/nvme_ns_cmd.o 00:04:15.251 CC lib/nvme/nvme_qpair.o 00:04:15.251 CC lib/nvme/nvme_pcie.o 00:04:15.251 CC lib/nvme/nvme_pcie_common.o 00:04:15.251 CC lib/nvme/nvme.o 00:04:15.830 CC lib/nvme/nvme_quirks.o 00:04:15.830 CC lib/nvme/nvme_transport.o 00:04:15.830 CC lib/nvme/nvme_discovery.o 00:04:16.090 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:16.090 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:16.090 CC lib/nvme/nvme_tcp.o 00:04:16.350 CC lib/nvme/nvme_opal.o 00:04:16.350 CC lib/nvme/nvme_io_msg.o 00:04:16.350 CC lib/nvme/nvme_poll_group.o 00:04:16.610 LIB libspdk_thread.a 00:04:16.610 SO libspdk_thread.so.9.0 00:04:16.610 CC lib/nvme/nvme_zns.o 00:04:16.610 CC lib/nvme/nvme_cuse.o 00:04:16.610 SYMLINK libspdk_thread.so 00:04:16.610 CC lib/nvme/nvme_vfio_user.o 00:04:16.610 CC lib/nvme/nvme_rdma.o 00:04:16.869 CC lib/blob/blobstore.o 00:04:16.869 CC lib/accel/accel.o 00:04:16.869 CC lib/blob/request.o 00:04:16.869 CC lib/accel/accel_rpc.o 00:04:17.128 CC lib/accel/accel_sw.o 00:04:17.128 CC lib/blob/zeroes.o 00:04:17.128 CC lib/blob/blob_bs_dev.o 00:04:17.388 CC lib/init/json_config.o 00:04:17.388 CC lib/init/subsystem.o 00:04:17.388 CC lib/init/subsystem_rpc.o 00:04:17.388 CC lib/virtio/virtio.o 00:04:17.388 CC lib/virtio/virtio_vhost_user.o 00:04:17.388 CC lib/virtio/virtio_vfio_user.o 00:04:17.647 CC lib/init/rpc.o 00:04:17.647 CC lib/virtio/virtio_pci.o 00:04:17.647 LIB libspdk_init.a 00:04:17.647 SO libspdk_init.so.4.0 00:04:17.908 LIB libspdk_accel.a 00:04:17.908 SYMLINK libspdk_init.so 00:04:17.908 SO libspdk_accel.so.14.0 00:04:17.908 LIB libspdk_virtio.a 00:04:17.908 SO libspdk_virtio.so.6.0 00:04:17.908 SYMLINK libspdk_accel.so 00:04:17.908 CC lib/event/app.o 00:04:17.908 CC lib/event/app_rpc.o 00:04:17.908 CC lib/event/reactor.o 00:04:17.908 CC lib/event/log_rpc.o 00:04:17.908 CC lib/event/scheduler_static.o 00:04:17.908 SYMLINK libspdk_virtio.so 00:04:18.166 LIB libspdk_nvme.a 00:04:18.166 CC lib/bdev/bdev.o 00:04:18.166 CC lib/bdev/bdev_rpc.o 00:04:18.166 CC lib/bdev/bdev_zone.o 00:04:18.166 CC lib/bdev/part.o 00:04:18.166 CC lib/bdev/scsi_nvme.o 00:04:18.424 SO libspdk_nvme.so.12.0 00:04:18.424 LIB libspdk_event.a 00:04:18.424 SO libspdk_event.so.12.0 00:04:18.424 SYMLINK libspdk_event.so 00:04:18.424 SYMLINK libspdk_nvme.so 00:04:19.358 
LIB libspdk_blob.a 00:04:19.358 SO libspdk_blob.so.10.1 00:04:19.616 SYMLINK libspdk_blob.so 00:04:19.874 CC lib/lvol/lvol.o 00:04:19.874 CC lib/blobfs/blobfs.o 00:04:19.874 CC lib/blobfs/tree.o 00:04:20.440 LIB libspdk_bdev.a 00:04:20.440 SO libspdk_bdev.so.14.0 00:04:20.440 LIB libspdk_lvol.a 00:04:20.440 LIB libspdk_blobfs.a 00:04:20.697 SO libspdk_lvol.so.9.1 00:04:20.697 SO libspdk_blobfs.so.9.0 00:04:20.697 SYMLINK libspdk_bdev.so 00:04:20.697 SYMLINK libspdk_lvol.so 00:04:20.697 SYMLINK libspdk_blobfs.so 00:04:20.697 CC lib/nvmf/ctrlr.o 00:04:20.697 CC lib/nvmf/ctrlr_discovery.o 00:04:20.697 CC lib/nvmf/ctrlr_bdev.o 00:04:20.698 CC lib/nvmf/subsystem.o 00:04:20.698 CC lib/nvmf/nvmf.o 00:04:20.698 CC lib/nvmf/nvmf_rpc.o 00:04:20.698 CC lib/nbd/nbd.o 00:04:20.698 CC lib/scsi/dev.o 00:04:20.698 CC lib/ftl/ftl_core.o 00:04:20.698 CC lib/ublk/ublk.o 00:04:20.955 CC lib/scsi/lun.o 00:04:21.213 CC lib/nbd/nbd_rpc.o 00:04:21.213 CC lib/ftl/ftl_init.o 00:04:21.213 LIB libspdk_nbd.a 00:04:21.213 CC lib/nvmf/transport.o 00:04:21.213 SO libspdk_nbd.so.6.0 00:04:21.213 CC lib/ftl/ftl_layout.o 00:04:21.471 CC lib/scsi/port.o 00:04:21.471 SYMLINK libspdk_nbd.so 00:04:21.471 CC lib/ftl/ftl_debug.o 00:04:21.471 CC lib/ublk/ublk_rpc.o 00:04:21.471 CC lib/nvmf/tcp.o 00:04:21.471 CC lib/scsi/scsi.o 00:04:21.471 CC lib/nvmf/rdma.o 00:04:21.471 LIB libspdk_ublk.a 00:04:21.728 SO libspdk_ublk.so.2.0 00:04:21.728 CC lib/scsi/scsi_bdev.o 00:04:21.728 CC lib/scsi/scsi_pr.o 00:04:21.728 CC lib/ftl/ftl_io.o 00:04:21.728 SYMLINK libspdk_ublk.so 00:04:21.728 CC lib/ftl/ftl_sb.o 00:04:21.728 CC lib/ftl/ftl_l2p.o 00:04:21.986 CC lib/ftl/ftl_l2p_flat.o 00:04:21.986 CC lib/ftl/ftl_nv_cache.o 00:04:21.986 CC lib/ftl/ftl_band.o 00:04:21.986 CC lib/ftl/ftl_band_ops.o 00:04:21.986 CC lib/ftl/ftl_writer.o 00:04:21.986 CC lib/scsi/scsi_rpc.o 00:04:21.986 CC lib/ftl/ftl_rq.o 00:04:21.986 CC lib/ftl/ftl_reloc.o 00:04:22.244 CC lib/scsi/task.o 00:04:22.244 CC lib/ftl/ftl_l2p_cache.o 00:04:22.244 CC lib/ftl/ftl_p2l.o 00:04:22.244 CC lib/ftl/mngt/ftl_mngt.o 00:04:22.244 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:22.244 LIB libspdk_scsi.a 00:04:22.244 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:22.503 SO libspdk_scsi.so.8.0 00:04:22.503 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:22.503 SYMLINK libspdk_scsi.so 00:04:22.503 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:22.503 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:22.503 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:22.503 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:22.762 CC lib/iscsi/conn.o 00:04:22.762 CC lib/vhost/vhost.o 00:04:22.762 CC lib/iscsi/init_grp.o 00:04:22.762 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.762 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.762 CC lib/iscsi/iscsi.o 00:04:22.762 CC lib/iscsi/md5.o 00:04:22.762 CC lib/iscsi/param.o 00:04:23.021 CC lib/iscsi/portal_grp.o 00:04:23.021 CC lib/iscsi/tgt_node.o 00:04:23.021 CC lib/iscsi/iscsi_subsystem.o 00:04:23.021 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:23.021 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:23.021 CC lib/iscsi/iscsi_rpc.o 00:04:23.280 CC lib/iscsi/task.o 00:04:23.280 CC lib/vhost/vhost_rpc.o 00:04:23.280 CC lib/vhost/vhost_scsi.o 00:04:23.280 CC lib/vhost/vhost_blk.o 00:04:23.280 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:23.539 CC lib/ftl/utils/ftl_conf.o 00:04:23.539 CC lib/vhost/rte_vhost_user.o 00:04:23.539 CC lib/ftl/utils/ftl_md.o 00:04:23.539 CC lib/ftl/utils/ftl_mempool.o 00:04:23.539 LIB libspdk_nvmf.a 00:04:23.539 CC lib/ftl/utils/ftl_bitmap.o 00:04:23.539 SO libspdk_nvmf.so.17.0 00:04:23.539 CC lib/ftl/utils/ftl_property.o 
00:04:23.798 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:23.798 SYMLINK libspdk_nvmf.so 00:04:23.798 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:23.798 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:23.798 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:23.798 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:23.798 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:24.056 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:24.056 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:24.056 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:24.056 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:24.056 CC lib/ftl/base/ftl_base_dev.o 00:04:24.056 CC lib/ftl/base/ftl_base_bdev.o 00:04:24.056 LIB libspdk_iscsi.a 00:04:24.314 SO libspdk_iscsi.so.7.0 00:04:24.314 CC lib/ftl/ftl_trace.o 00:04:24.314 SYMLINK libspdk_iscsi.so 00:04:24.314 LIB libspdk_ftl.a 00:04:24.572 LIB libspdk_vhost.a 00:04:24.572 SO libspdk_vhost.so.7.1 00:04:24.572 SYMLINK libspdk_vhost.so 00:04:24.572 SO libspdk_ftl.so.8.0 00:04:24.830 SYMLINK libspdk_ftl.so 00:04:25.090 CC module/env_dpdk/env_dpdk_rpc.o 00:04:25.090 CC module/accel/iaa/accel_iaa.o 00:04:25.090 CC module/accel/error/accel_error.o 00:04:25.090 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:25.090 CC module/accel/ioat/accel_ioat.o 00:04:25.090 CC module/accel/dsa/accel_dsa.o 00:04:25.090 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:25.090 CC module/blob/bdev/blob_bdev.o 00:04:25.090 CC module/sock/posix/posix.o 00:04:25.090 CC module/scheduler/gscheduler/gscheduler.o 00:04:25.090 LIB libspdk_env_dpdk_rpc.a 00:04:25.349 SO libspdk_env_dpdk_rpc.so.5.0 00:04:25.349 LIB libspdk_scheduler_dpdk_governor.a 00:04:25.349 SYMLINK libspdk_env_dpdk_rpc.so 00:04:25.349 LIB libspdk_scheduler_gscheduler.a 00:04:25.349 CC module/accel/dsa/accel_dsa_rpc.o 00:04:25.349 CC module/accel/error/accel_error_rpc.o 00:04:25.349 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:25.349 SO libspdk_scheduler_gscheduler.so.3.0 00:04:25.349 CC module/accel/ioat/accel_ioat_rpc.o 00:04:25.349 CC module/accel/iaa/accel_iaa_rpc.o 00:04:25.349 LIB libspdk_scheduler_dynamic.a 00:04:25.349 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:25.349 SO libspdk_scheduler_dynamic.so.3.0 00:04:25.349 SYMLINK libspdk_scheduler_gscheduler.so 00:04:25.349 LIB libspdk_blob_bdev.a 00:04:25.349 SYMLINK libspdk_scheduler_dynamic.so 00:04:25.349 SO libspdk_blob_bdev.so.10.1 00:04:25.349 LIB libspdk_accel_dsa.a 00:04:25.349 LIB libspdk_accel_ioat.a 00:04:25.349 LIB libspdk_accel_error.a 00:04:25.349 SO libspdk_accel_dsa.so.4.0 00:04:25.608 LIB libspdk_accel_iaa.a 00:04:25.608 SYMLINK libspdk_blob_bdev.so 00:04:25.608 SO libspdk_accel_ioat.so.5.0 00:04:25.608 SO libspdk_accel_error.so.1.0 00:04:25.608 SO libspdk_accel_iaa.so.2.0 00:04:25.608 SYMLINK libspdk_accel_dsa.so 00:04:25.608 SYMLINK libspdk_accel_error.so 00:04:25.608 SYMLINK libspdk_accel_ioat.so 00:04:25.608 SYMLINK libspdk_accel_iaa.so 00:04:25.608 CC module/blobfs/bdev/blobfs_bdev.o 00:04:25.608 CC module/bdev/delay/vbdev_delay.o 00:04:25.608 CC module/bdev/error/vbdev_error.o 00:04:25.608 CC module/bdev/malloc/bdev_malloc.o 00:04:25.608 CC module/bdev/gpt/gpt.o 00:04:25.608 CC module/bdev/lvol/vbdev_lvol.o 00:04:25.608 CC module/bdev/null/bdev_null.o 00:04:25.608 CC module/bdev/passthru/vbdev_passthru.o 00:04:25.608 CC module/bdev/nvme/bdev_nvme.o 00:04:25.867 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:25.867 LIB libspdk_sock_posix.a 00:04:25.867 CC module/bdev/gpt/vbdev_gpt.o 00:04:25.867 SO libspdk_sock_posix.so.5.0 00:04:25.867 CC module/bdev/null/bdev_null_rpc.o 00:04:25.867 SYMLINK 
libspdk_sock_posix.so 00:04:25.867 CC module/bdev/error/vbdev_error_rpc.o 00:04:25.867 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:25.867 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:26.126 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:26.126 LIB libspdk_blobfs_bdev.a 00:04:26.126 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:26.126 SO libspdk_blobfs_bdev.so.5.0 00:04:26.126 SYMLINK libspdk_blobfs_bdev.so 00:04:26.126 LIB libspdk_bdev_error.a 00:04:26.126 CC module/bdev/nvme/nvme_rpc.o 00:04:26.126 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:26.126 LIB libspdk_bdev_null.a 00:04:26.126 LIB libspdk_bdev_passthru.a 00:04:26.126 SO libspdk_bdev_error.so.5.0 00:04:26.126 LIB libspdk_bdev_gpt.a 00:04:26.126 LIB libspdk_bdev_malloc.a 00:04:26.126 SO libspdk_bdev_null.so.5.0 00:04:26.126 SO libspdk_bdev_passthru.so.5.0 00:04:26.126 SO libspdk_bdev_gpt.so.5.0 00:04:26.126 SO libspdk_bdev_malloc.so.5.0 00:04:26.385 SYMLINK libspdk_bdev_error.so 00:04:26.385 LIB libspdk_bdev_delay.a 00:04:26.385 SYMLINK libspdk_bdev_passthru.so 00:04:26.385 SYMLINK libspdk_bdev_null.so 00:04:26.385 SYMLINK libspdk_bdev_gpt.so 00:04:26.385 CC module/bdev/nvme/bdev_mdns_client.o 00:04:26.385 SYMLINK libspdk_bdev_malloc.so 00:04:26.385 SO libspdk_bdev_delay.so.5.0 00:04:26.385 SYMLINK libspdk_bdev_delay.so 00:04:26.385 CC module/bdev/nvme/vbdev_opal.o 00:04:26.385 CC module/bdev/raid/bdev_raid.o 00:04:26.385 CC module/bdev/split/vbdev_split.o 00:04:26.385 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:26.385 CC module/bdev/split/vbdev_split_rpc.o 00:04:26.385 CC module/bdev/aio/bdev_aio.o 00:04:26.385 LIB libspdk_bdev_lvol.a 00:04:26.644 SO libspdk_bdev_lvol.so.5.0 00:04:26.644 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:26.644 CC module/bdev/aio/bdev_aio_rpc.o 00:04:26.644 SYMLINK libspdk_bdev_lvol.so 00:04:26.644 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:26.644 LIB libspdk_bdev_split.a 00:04:26.644 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:26.644 SO libspdk_bdev_split.so.5.0 00:04:26.644 CC module/bdev/ftl/bdev_ftl.o 00:04:26.644 CC module/bdev/raid/bdev_raid_rpc.o 00:04:26.644 CC module/bdev/raid/bdev_raid_sb.o 00:04:26.644 SYMLINK libspdk_bdev_split.so 00:04:26.644 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:26.903 LIB libspdk_bdev_aio.a 00:04:26.903 LIB libspdk_bdev_zone_block.a 00:04:26.903 SO libspdk_bdev_aio.so.5.0 00:04:26.903 SO libspdk_bdev_zone_block.so.5.0 00:04:26.903 CC module/bdev/iscsi/bdev_iscsi.o 00:04:26.903 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:26.903 SYMLINK libspdk_bdev_aio.so 00:04:26.903 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:26.903 SYMLINK libspdk_bdev_zone_block.so 00:04:26.903 CC module/bdev/raid/raid0.o 00:04:26.903 CC module/bdev/raid/raid1.o 00:04:26.903 CC module/bdev/raid/concat.o 00:04:26.903 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:26.903 LIB libspdk_bdev_ftl.a 00:04:27.162 SO libspdk_bdev_ftl.so.5.0 00:04:27.162 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.163 SYMLINK libspdk_bdev_ftl.so 00:04:27.163 LIB libspdk_bdev_iscsi.a 00:04:27.163 SO libspdk_bdev_iscsi.so.5.0 00:04:27.163 LIB libspdk_bdev_raid.a 00:04:27.422 SYMLINK libspdk_bdev_iscsi.so 00:04:27.422 SO libspdk_bdev_raid.so.5.0 00:04:27.422 LIB libspdk_bdev_virtio.a 00:04:27.422 SYMLINK libspdk_bdev_raid.so 00:04:27.422 SO libspdk_bdev_virtio.so.5.0 00:04:27.422 SYMLINK libspdk_bdev_virtio.so 00:04:27.995 LIB libspdk_bdev_nvme.a 00:04:27.995 SO libspdk_bdev_nvme.so.6.0 00:04:27.995 SYMLINK libspdk_bdev_nvme.so 00:04:28.254 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:28.254 CC 
module/event/subsystems/scheduler/scheduler.o 00:04:28.254 CC module/event/subsystems/vmd/vmd.o 00:04:28.254 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:28.254 CC module/event/subsystems/sock/sock.o 00:04:28.254 CC module/event/subsystems/iobuf/iobuf.o 00:04:28.254 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:28.512 LIB libspdk_event_sock.a 00:04:28.512 SO libspdk_event_sock.so.4.0 00:04:28.512 LIB libspdk_event_iobuf.a 00:04:28.512 LIB libspdk_event_vhost_blk.a 00:04:28.512 LIB libspdk_event_vmd.a 00:04:28.512 LIB libspdk_event_scheduler.a 00:04:28.512 SO libspdk_event_iobuf.so.2.0 00:04:28.512 SO libspdk_event_vhost_blk.so.2.0 00:04:28.512 SO libspdk_event_vmd.so.5.0 00:04:28.512 SYMLINK libspdk_event_sock.so 00:04:28.512 SO libspdk_event_scheduler.so.3.0 00:04:28.512 SYMLINK libspdk_event_iobuf.so 00:04:28.512 SYMLINK libspdk_event_vmd.so 00:04:28.512 SYMLINK libspdk_event_scheduler.so 00:04:28.512 SYMLINK libspdk_event_vhost_blk.so 00:04:28.770 CC module/event/subsystems/accel/accel.o 00:04:29.028 LIB libspdk_event_accel.a 00:04:29.028 SO libspdk_event_accel.so.5.0 00:04:29.028 SYMLINK libspdk_event_accel.so 00:04:29.286 CC module/event/subsystems/bdev/bdev.o 00:04:29.286 LIB libspdk_event_bdev.a 00:04:29.552 SO libspdk_event_bdev.so.5.0 00:04:29.552 SYMLINK libspdk_event_bdev.so 00:04:29.552 CC module/event/subsystems/ublk/ublk.o 00:04:29.552 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:29.552 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:29.552 CC module/event/subsystems/nbd/nbd.o 00:04:29.552 CC module/event/subsystems/scsi/scsi.o 00:04:29.823 LIB libspdk_event_ublk.a 00:04:29.823 SO libspdk_event_ublk.so.2.0 00:04:29.823 LIB libspdk_event_nbd.a 00:04:29.823 LIB libspdk_event_scsi.a 00:04:29.823 SO libspdk_event_nbd.so.5.0 00:04:29.823 SYMLINK libspdk_event_ublk.so 00:04:29.823 SO libspdk_event_scsi.so.5.0 00:04:29.823 SYMLINK libspdk_event_scsi.so 00:04:29.823 SYMLINK libspdk_event_nbd.so 00:04:29.823 LIB libspdk_event_nvmf.a 00:04:30.083 SO libspdk_event_nvmf.so.5.0 00:04:30.083 SYMLINK libspdk_event_nvmf.so 00:04:30.083 CC module/event/subsystems/iscsi/iscsi.o 00:04:30.083 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:30.342 LIB libspdk_event_vhost_scsi.a 00:04:30.342 LIB libspdk_event_iscsi.a 00:04:30.342 SO libspdk_event_vhost_scsi.so.2.0 00:04:30.342 SO libspdk_event_iscsi.so.5.0 00:04:30.342 SYMLINK libspdk_event_vhost_scsi.so 00:04:30.342 SYMLINK libspdk_event_iscsi.so 00:04:30.601 SO libspdk.so.5.0 00:04:30.601 SYMLINK libspdk.so 00:04:30.601 CXX app/trace/trace.o 00:04:30.601 CC app/trace_record/trace_record.o 00:04:30.601 CC app/iscsi_tgt/iscsi_tgt.o 00:04:30.601 CC app/nvmf_tgt/nvmf_main.o 00:04:30.859 CC app/spdk_tgt/spdk_tgt.o 00:04:30.859 CC examples/accel/perf/accel_perf.o 00:04:30.859 CC test/blobfs/mkfs/mkfs.o 00:04:30.859 CC test/accel/dif/dif.o 00:04:30.859 CC test/app/bdev_svc/bdev_svc.o 00:04:30.859 CC test/bdev/bdevio/bdevio.o 00:04:30.859 LINK nvmf_tgt 00:04:30.859 LINK iscsi_tgt 00:04:31.117 LINK spdk_trace_record 00:04:31.117 LINK spdk_tgt 00:04:31.117 LINK bdev_svc 00:04:31.117 LINK mkfs 00:04:31.117 LINK spdk_trace 00:04:31.117 TEST_HEADER include/spdk/accel.h 00:04:31.117 TEST_HEADER include/spdk/accel_module.h 00:04:31.117 TEST_HEADER include/spdk/assert.h 00:04:31.117 TEST_HEADER include/spdk/barrier.h 00:04:31.117 TEST_HEADER include/spdk/base64.h 00:04:31.117 TEST_HEADER include/spdk/bdev.h 00:04:31.117 TEST_HEADER include/spdk/bdev_module.h 00:04:31.117 TEST_HEADER include/spdk/bdev_zone.h 00:04:31.117 TEST_HEADER 
include/spdk/bit_array.h 00:04:31.117 TEST_HEADER include/spdk/bit_pool.h 00:04:31.117 TEST_HEADER include/spdk/blob_bdev.h 00:04:31.117 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:31.117 TEST_HEADER include/spdk/blobfs.h 00:04:31.117 TEST_HEADER include/spdk/blob.h 00:04:31.117 TEST_HEADER include/spdk/conf.h 00:04:31.117 TEST_HEADER include/spdk/config.h 00:04:31.117 TEST_HEADER include/spdk/cpuset.h 00:04:31.117 TEST_HEADER include/spdk/crc16.h 00:04:31.117 TEST_HEADER include/spdk/crc32.h 00:04:31.117 TEST_HEADER include/spdk/crc64.h 00:04:31.117 TEST_HEADER include/spdk/dif.h 00:04:31.117 TEST_HEADER include/spdk/dma.h 00:04:31.117 TEST_HEADER include/spdk/endian.h 00:04:31.117 TEST_HEADER include/spdk/env_dpdk.h 00:04:31.117 TEST_HEADER include/spdk/env.h 00:04:31.117 TEST_HEADER include/spdk/event.h 00:04:31.117 TEST_HEADER include/spdk/fd_group.h 00:04:31.117 LINK dif 00:04:31.117 TEST_HEADER include/spdk/fd.h 00:04:31.117 LINK bdevio 00:04:31.117 TEST_HEADER include/spdk/file.h 00:04:31.117 TEST_HEADER include/spdk/ftl.h 00:04:31.117 TEST_HEADER include/spdk/gpt_spec.h 00:04:31.376 TEST_HEADER include/spdk/hexlify.h 00:04:31.376 TEST_HEADER include/spdk/histogram_data.h 00:04:31.376 TEST_HEADER include/spdk/idxd.h 00:04:31.376 TEST_HEADER include/spdk/idxd_spec.h 00:04:31.376 TEST_HEADER include/spdk/init.h 00:04:31.376 TEST_HEADER include/spdk/ioat.h 00:04:31.376 TEST_HEADER include/spdk/ioat_spec.h 00:04:31.376 TEST_HEADER include/spdk/iscsi_spec.h 00:04:31.376 TEST_HEADER include/spdk/json.h 00:04:31.376 TEST_HEADER include/spdk/jsonrpc.h 00:04:31.376 TEST_HEADER include/spdk/likely.h 00:04:31.376 TEST_HEADER include/spdk/log.h 00:04:31.376 TEST_HEADER include/spdk/lvol.h 00:04:31.376 TEST_HEADER include/spdk/memory.h 00:04:31.376 TEST_HEADER include/spdk/mmio.h 00:04:31.376 TEST_HEADER include/spdk/nbd.h 00:04:31.376 TEST_HEADER include/spdk/notify.h 00:04:31.376 TEST_HEADER include/spdk/nvme.h 00:04:31.376 TEST_HEADER include/spdk/nvme_intel.h 00:04:31.376 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:31.376 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:31.376 TEST_HEADER include/spdk/nvme_spec.h 00:04:31.376 TEST_HEADER include/spdk/nvme_zns.h 00:04:31.376 LINK accel_perf 00:04:31.376 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:31.376 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:31.376 TEST_HEADER include/spdk/nvmf.h 00:04:31.376 TEST_HEADER include/spdk/nvmf_spec.h 00:04:31.376 TEST_HEADER include/spdk/nvmf_transport.h 00:04:31.376 TEST_HEADER include/spdk/opal.h 00:04:31.376 TEST_HEADER include/spdk/opal_spec.h 00:04:31.376 TEST_HEADER include/spdk/pci_ids.h 00:04:31.376 TEST_HEADER include/spdk/pipe.h 00:04:31.376 CC test/app/histogram_perf/histogram_perf.o 00:04:31.376 CC examples/bdev/hello_world/hello_bdev.o 00:04:31.376 TEST_HEADER include/spdk/queue.h 00:04:31.376 TEST_HEADER include/spdk/reduce.h 00:04:31.376 TEST_HEADER include/spdk/rpc.h 00:04:31.376 TEST_HEADER include/spdk/scheduler.h 00:04:31.376 TEST_HEADER include/spdk/scsi.h 00:04:31.376 CC test/dma/test_dma/test_dma.o 00:04:31.376 TEST_HEADER include/spdk/scsi_spec.h 00:04:31.376 TEST_HEADER include/spdk/sock.h 00:04:31.376 TEST_HEADER include/spdk/stdinc.h 00:04:31.376 TEST_HEADER include/spdk/string.h 00:04:31.376 TEST_HEADER include/spdk/thread.h 00:04:31.376 TEST_HEADER include/spdk/trace.h 00:04:31.376 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:31.376 TEST_HEADER include/spdk/trace_parser.h 00:04:31.376 TEST_HEADER include/spdk/tree.h 00:04:31.376 TEST_HEADER include/spdk/ublk.h 00:04:31.376 
TEST_HEADER include/spdk/util.h 00:04:31.376 TEST_HEADER include/spdk/uuid.h 00:04:31.376 TEST_HEADER include/spdk/version.h 00:04:31.376 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:31.376 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:31.376 TEST_HEADER include/spdk/vhost.h 00:04:31.376 TEST_HEADER include/spdk/vmd.h 00:04:31.376 TEST_HEADER include/spdk/xor.h 00:04:31.376 TEST_HEADER include/spdk/zipf.h 00:04:31.376 CXX test/cpp_headers/accel.o 00:04:31.376 CC app/spdk_lspci/spdk_lspci.o 00:04:31.376 LINK histogram_perf 00:04:31.376 CC test/env/mem_callbacks/mem_callbacks.o 00:04:31.635 LINK spdk_lspci 00:04:31.635 CXX test/cpp_headers/accel_module.o 00:04:31.635 LINK hello_bdev 00:04:31.635 CC test/event/event_perf/event_perf.o 00:04:31.635 CC test/nvme/aer/aer.o 00:04:31.635 CC test/lvol/esnap/esnap.o 00:04:31.635 CC test/rpc_client/rpc_client_test.o 00:04:31.635 CC app/spdk_nvme_perf/perf.o 00:04:31.635 LINK nvme_fuzz 00:04:31.635 CXX test/cpp_headers/assert.o 00:04:31.635 LINK test_dma 00:04:31.635 LINK event_perf 00:04:31.894 LINK rpc_client_test 00:04:31.894 LINK aer 00:04:31.894 CC examples/bdev/bdevperf/bdevperf.o 00:04:31.894 CXX test/cpp_headers/barrier.o 00:04:31.894 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:31.894 CC test/event/reactor/reactor.o 00:04:31.894 CC test/app/jsoncat/jsoncat.o 00:04:32.161 CXX test/cpp_headers/base64.o 00:04:32.161 CC test/nvme/reset/reset.o 00:04:32.161 CC test/app/stub/stub.o 00:04:32.161 LINK mem_callbacks 00:04:32.161 LINK reactor 00:04:32.161 LINK jsoncat 00:04:32.161 CXX test/cpp_headers/bdev.o 00:04:32.161 LINK stub 00:04:32.161 CC test/env/vtophys/vtophys.o 00:04:32.420 LINK reset 00:04:32.420 CC test/event/app_repeat/app_repeat.o 00:04:32.420 CC test/event/reactor_perf/reactor_perf.o 00:04:32.420 CXX test/cpp_headers/bdev_module.o 00:04:32.420 LINK vtophys 00:04:32.420 CC test/event/scheduler/scheduler.o 00:04:32.420 CC test/nvme/sgl/sgl.o 00:04:32.420 LINK app_repeat 00:04:32.420 LINK reactor_perf 00:04:32.420 LINK spdk_nvme_perf 00:04:32.679 CXX test/cpp_headers/bdev_zone.o 00:04:32.679 LINK bdevperf 00:04:32.679 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:32.679 LINK scheduler 00:04:32.679 CXX test/cpp_headers/bit_array.o 00:04:32.679 LINK sgl 00:04:32.679 CC test/nvme/e2edp/nvme_dp.o 00:04:32.679 CC app/spdk_nvme_identify/identify.o 00:04:32.679 CC test/thread/poller_perf/poller_perf.o 00:04:32.679 LINK env_dpdk_post_init 00:04:32.938 CXX test/cpp_headers/bit_pool.o 00:04:32.938 CXX test/cpp_headers/blob_bdev.o 00:04:32.938 LINK poller_perf 00:04:32.938 CC test/nvme/overhead/overhead.o 00:04:32.938 CC test/env/memory/memory_ut.o 00:04:32.938 LINK nvme_dp 00:04:32.938 CC examples/blob/hello_world/hello_blob.o 00:04:33.197 CXX test/cpp_headers/blobfs_bdev.o 00:04:33.197 CC test/env/pci/pci_ut.o 00:04:33.197 CC test/nvme/err_injection/err_injection.o 00:04:33.197 LINK hello_blob 00:04:33.197 CC test/nvme/startup/startup.o 00:04:33.197 LINK overhead 00:04:33.197 CXX test/cpp_headers/blobfs.o 00:04:33.197 LINK err_injection 00:04:33.457 CXX test/cpp_headers/blob.o 00:04:33.457 LINK startup 00:04:33.457 LINK spdk_nvme_identify 00:04:33.457 CXX test/cpp_headers/conf.o 00:04:33.457 LINK pci_ut 00:04:33.457 CC test/nvme/reserve/reserve.o 00:04:33.457 CXX test/cpp_headers/config.o 00:04:33.457 CC examples/blob/cli/blobcli.o 00:04:33.715 LINK iscsi_fuzz 00:04:33.715 CXX test/cpp_headers/cpuset.o 00:04:33.715 CC app/spdk_nvme_discover/discovery_aer.o 00:04:33.715 LINK reserve 00:04:33.715 CC examples/ioat/perf/perf.o 
00:04:33.715 CC examples/nvme/hello_world/hello_world.o 00:04:33.715 CC examples/ioat/verify/verify.o 00:04:33.974 CXX test/cpp_headers/crc16.o 00:04:33.974 LINK memory_ut 00:04:33.974 LINK spdk_nvme_discover 00:04:33.974 CC test/nvme/simple_copy/simple_copy.o 00:04:33.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:33.974 LINK ioat_perf 00:04:33.974 CXX test/cpp_headers/crc32.o 00:04:33.974 LINK verify 00:04:33.974 LINK blobcli 00:04:33.974 LINK hello_world 00:04:33.974 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:33.974 CC app/spdk_top/spdk_top.o 00:04:34.232 CC app/vhost/vhost.o 00:04:34.232 CXX test/cpp_headers/crc64.o 00:04:34.232 LINK simple_copy 00:04:34.232 CC examples/sock/hello_world/hello_sock.o 00:04:34.232 CC examples/nvme/reconnect/reconnect.o 00:04:34.232 CC examples/vmd/lsvmd/lsvmd.o 00:04:34.232 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:34.232 CXX test/cpp_headers/dif.o 00:04:34.232 LINK vhost 00:04:34.489 LINK lsvmd 00:04:34.489 CC test/nvme/connect_stress/connect_stress.o 00:04:34.489 LINK vhost_fuzz 00:04:34.489 CXX test/cpp_headers/dma.o 00:04:34.489 LINK hello_sock 00:04:34.489 CC examples/nvme/arbitration/arbitration.o 00:04:34.489 LINK reconnect 00:04:34.746 LINK connect_stress 00:04:34.746 CC examples/vmd/led/led.o 00:04:34.746 CXX test/cpp_headers/endian.o 00:04:34.746 CXX test/cpp_headers/env_dpdk.o 00:04:34.746 CC app/spdk_dd/spdk_dd.o 00:04:34.746 CC examples/nvmf/nvmf/nvmf.o 00:04:34.746 LINK led 00:04:34.746 LINK nvme_manage 00:04:34.746 CC test/nvme/boot_partition/boot_partition.o 00:04:35.003 LINK arbitration 00:04:35.003 CXX test/cpp_headers/env.o 00:04:35.003 CC app/fio/nvme/fio_plugin.o 00:04:35.003 LINK spdk_top 00:04:35.003 CC app/fio/bdev/fio_plugin.o 00:04:35.003 LINK boot_partition 00:04:35.003 CC examples/util/zipf/zipf.o 00:04:35.003 CXX test/cpp_headers/event.o 00:04:35.003 LINK nvmf 00:04:35.003 CXX test/cpp_headers/fd_group.o 00:04:35.260 CC examples/nvme/hotplug/hotplug.o 00:04:35.260 LINK spdk_dd 00:04:35.260 LINK zipf 00:04:35.260 CC test/nvme/compliance/nvme_compliance.o 00:04:35.260 CXX test/cpp_headers/fd.o 00:04:35.260 CC test/nvme/fused_ordering/fused_ordering.o 00:04:35.260 LINK hotplug 00:04:35.518 CC examples/idxd/perf/perf.o 00:04:35.518 CXX test/cpp_headers/file.o 00:04:35.518 CC examples/thread/thread/thread_ex.o 00:04:35.518 LINK spdk_nvme 00:04:35.518 LINK spdk_bdev 00:04:35.518 LINK fused_ordering 00:04:35.518 LINK nvme_compliance 00:04:35.518 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.518 CXX test/cpp_headers/ftl.o 00:04:35.775 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:35.775 LINK thread 00:04:35.775 CC examples/nvme/abort/abort.o 00:04:35.775 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:35.775 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:35.775 LINK idxd_perf 00:04:35.775 LINK cmb_copy 00:04:35.775 CXX test/cpp_headers/gpt_spec.o 00:04:35.775 LINK interrupt_tgt 00:04:36.033 CXX test/cpp_headers/hexlify.o 00:04:36.033 LINK pmr_persistence 00:04:36.033 LINK doorbell_aers 00:04:36.033 CXX test/cpp_headers/histogram_data.o 00:04:36.033 CC test/nvme/fdp/fdp.o 00:04:36.033 CC test/nvme/cuse/cuse.o 00:04:36.033 CXX test/cpp_headers/idxd.o 00:04:36.033 CXX test/cpp_headers/idxd_spec.o 00:04:36.033 CXX test/cpp_headers/init.o 00:04:36.033 LINK abort 00:04:36.033 CXX test/cpp_headers/ioat.o 00:04:36.033 CXX test/cpp_headers/ioat_spec.o 00:04:36.291 CXX test/cpp_headers/iscsi_spec.o 00:04:36.291 CXX test/cpp_headers/json.o 00:04:36.291 CXX test/cpp_headers/jsonrpc.o 00:04:36.291 CXX 
test/cpp_headers/likely.o 00:04:36.292 LINK fdp 00:04:36.292 CXX test/cpp_headers/log.o 00:04:36.292 CXX test/cpp_headers/lvol.o 00:04:36.292 LINK esnap 00:04:36.292 CXX test/cpp_headers/memory.o 00:04:36.292 CXX test/cpp_headers/mmio.o 00:04:36.292 CXX test/cpp_headers/nbd.o 00:04:36.292 CXX test/cpp_headers/notify.o 00:04:36.292 CXX test/cpp_headers/nvme.o 00:04:36.551 CXX test/cpp_headers/nvme_intel.o 00:04:36.551 CXX test/cpp_headers/nvme_ocssd.o 00:04:36.551 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:36.551 CXX test/cpp_headers/nvme_spec.o 00:04:36.551 CXX test/cpp_headers/nvmf_cmd.o 00:04:36.808 CXX test/cpp_headers/nvme_zns.o 00:04:36.808 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:36.808 CXX test/cpp_headers/nvmf.o 00:04:36.808 CXX test/cpp_headers/nvmf_spec.o 00:04:36.808 CXX test/cpp_headers/nvmf_transport.o 00:04:36.808 CXX test/cpp_headers/opal.o 00:04:36.808 CXX test/cpp_headers/opal_spec.o 00:04:36.808 CXX test/cpp_headers/pci_ids.o 00:04:36.808 CXX test/cpp_headers/pipe.o 00:04:36.808 CXX test/cpp_headers/queue.o 00:04:37.065 CXX test/cpp_headers/reduce.o 00:04:37.065 CXX test/cpp_headers/rpc.o 00:04:37.065 CXX test/cpp_headers/scheduler.o 00:04:37.065 CXX test/cpp_headers/scsi.o 00:04:37.065 CXX test/cpp_headers/scsi_spec.o 00:04:37.065 CXX test/cpp_headers/sock.o 00:04:37.065 CXX test/cpp_headers/stdinc.o 00:04:37.065 CXX test/cpp_headers/string.o 00:04:37.065 CXX test/cpp_headers/thread.o 00:04:37.065 CXX test/cpp_headers/trace.o 00:04:37.065 CXX test/cpp_headers/trace_parser.o 00:04:37.065 CXX test/cpp_headers/tree.o 00:04:37.065 CXX test/cpp_headers/ublk.o 00:04:37.065 LINK cuse 00:04:37.323 CXX test/cpp_headers/util.o 00:04:37.323 CXX test/cpp_headers/uuid.o 00:04:37.324 CXX test/cpp_headers/version.o 00:04:37.324 CXX test/cpp_headers/vfio_user_pci.o 00:04:37.324 CXX test/cpp_headers/vfio_user_spec.o 00:04:37.324 CXX test/cpp_headers/vhost.o 00:04:37.324 CXX test/cpp_headers/vmd.o 00:04:37.324 CXX test/cpp_headers/xor.o 00:04:37.324 CXX test/cpp_headers/zipf.o 00:04:41.508 00:04:41.508 real 0m52.434s 00:04:41.508 user 4m53.956s 00:04:41.508 sys 1m5.840s 00:04:41.508 07:55:52 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:41.508 07:55:52 -- common/autotest_common.sh@10 -- $ set +x 00:04:41.508 ************************************ 00:04:41.508 END TEST make 00:04:41.508 ************************************ 00:04:41.767 07:55:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:41.767 07:55:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:41.767 07:55:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:41.767 07:55:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:41.767 07:55:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:41.767 07:55:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:41.767 07:55:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:41.767 07:55:52 -- scripts/common.sh@335 -- # IFS=.-: 00:04:41.767 07:55:52 -- scripts/common.sh@335 -- # read -ra ver1 00:04:41.767 07:55:52 -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.767 07:55:52 -- scripts/common.sh@336 -- # read -ra ver2 00:04:41.767 07:55:52 -- scripts/common.sh@337 -- # local 'op=<' 00:04:41.767 07:55:52 -- scripts/common.sh@339 -- # ver1_l=2 00:04:41.767 07:55:52 -- scripts/common.sh@340 -- # ver2_l=1 00:04:41.767 07:55:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:41.767 07:55:52 -- scripts/common.sh@343 -- # case "$op" in 00:04:41.767 07:55:52 -- scripts/common.sh@344 -- # : 1 00:04:41.767 07:55:52 
-- scripts/common.sh@363 -- # (( v = 0 )) 00:04:41.767 07:55:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.767 07:55:52 -- scripts/common.sh@364 -- # decimal 1 00:04:41.767 07:55:52 -- scripts/common.sh@352 -- # local d=1 00:04:41.767 07:55:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.767 07:55:52 -- scripts/common.sh@354 -- # echo 1 00:04:41.767 07:55:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:41.767 07:55:52 -- scripts/common.sh@365 -- # decimal 2 00:04:41.767 07:55:52 -- scripts/common.sh@352 -- # local d=2 00:04:41.767 07:55:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.767 07:55:52 -- scripts/common.sh@354 -- # echo 2 00:04:41.767 07:55:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:41.767 07:55:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:41.767 07:55:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:41.767 07:55:52 -- scripts/common.sh@367 -- # return 0 00:04:41.767 07:55:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.767 07:55:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:41.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.767 --rc genhtml_branch_coverage=1 00:04:41.768 --rc genhtml_function_coverage=1 00:04:41.768 --rc genhtml_legend=1 00:04:41.768 --rc geninfo_all_blocks=1 00:04:41.768 --rc geninfo_unexecuted_blocks=1 00:04:41.768 00:04:41.768 ' 00:04:41.768 07:55:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.768 --rc genhtml_branch_coverage=1 00:04:41.768 --rc genhtml_function_coverage=1 00:04:41.768 --rc genhtml_legend=1 00:04:41.768 --rc geninfo_all_blocks=1 00:04:41.768 --rc geninfo_unexecuted_blocks=1 00:04:41.768 00:04:41.768 ' 00:04:41.768 07:55:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.768 --rc genhtml_branch_coverage=1 00:04:41.768 --rc genhtml_function_coverage=1 00:04:41.768 --rc genhtml_legend=1 00:04:41.768 --rc geninfo_all_blocks=1 00:04:41.768 --rc geninfo_unexecuted_blocks=1 00:04:41.768 00:04:41.768 ' 00:04:41.768 07:55:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:41.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.768 --rc genhtml_branch_coverage=1 00:04:41.768 --rc genhtml_function_coverage=1 00:04:41.768 --rc genhtml_legend=1 00:04:41.768 --rc geninfo_all_blocks=1 00:04:41.768 --rc geninfo_unexecuted_blocks=1 00:04:41.768 00:04:41.768 ' 00:04:41.768 07:55:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.768 07:55:52 -- nvmf/common.sh@7 -- # uname -s 00:04:41.768 07:55:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.768 07:55:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.768 07:55:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.768 07:55:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.768 07:55:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.768 07:55:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.768 07:55:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.768 07:55:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.768 07:55:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.768 07:55:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.768 07:55:52 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:04:41.768 07:55:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:04:41.768 07:55:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.768 07:55:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.768 07:55:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:41.768 07:55:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.768 07:55:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.768 07:55:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.768 07:55:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.768 07:55:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.768 07:55:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.768 07:55:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.768 07:55:52 -- paths/export.sh@5 -- # export PATH 00:04:41.768 07:55:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.768 07:55:52 -- nvmf/common.sh@46 -- # : 0 00:04:41.768 07:55:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:41.768 07:55:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:41.768 07:55:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:41.768 07:55:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.768 07:55:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.768 07:55:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:41.768 07:55:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:41.768 07:55:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:41.768 07:55:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.768 07:55:52 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.768 07:55:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.768 07:55:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.768 07:55:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.768 07:55:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.768 07:55:52 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.768 07:55:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.768 07:55:53 -- spdk/autotest.sh@46 -- # type -P udevadm 
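The host identity assembled above (NVME_HOSTNQN from `nvme gen-hostnqn`, the matching NVME_HOSTID, and the NVME_HOST argument array) is what the TCP tests later hand to `nvme connect`. A minimal sketch of how those pieces combine; the target address and subsystem NQN below are placeholder assumptions for illustration, not values from this run:

#!/usr/bin/env bash
# Sketch only: combining the host identity traced above into an 'nvme connect' call.
# TARGET_IP and SUBSYS_NQN are hypothetical, not taken from this log.
NVME_HOSTNQN=$(nvme gen-hostnqn)                  # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}               # the UUID part doubles as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVMF_PORT=4420

TARGET_IP=127.0.0.1                               # hypothetical target address
SUBSYS_NQN=nqn.2016-06.io.spdk:cnode1             # hypothetical subsystem NQN

nvme connect -t tcp -a "$TARGET_IP" -s "$NVMF_PORT" -n "$SUBSYS_NQN" "${NVME_HOST[@]}"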
00:04:41.768 07:55:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.768 07:55:53 -- spdk/autotest.sh@48 -- # udevadm_pid=61828 00:04:41.768 07:55:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.768 07:55:53 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.768 07:55:53 -- spdk/autotest.sh@54 -- # echo 61831 00:04:41.768 07:55:53 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.768 07:55:53 -- spdk/autotest.sh@56 -- # echo 61832 00:04:41.768 07:55:53 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.768 07:55:53 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:41.768 07:55:53 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.768 07:55:53 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:41.768 07:55:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.768 07:55:53 -- common/autotest_common.sh@10 -- # set +x 00:04:41.768 07:55:53 -- spdk/autotest.sh@70 -- # create_test_list 00:04:41.768 07:55:53 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:41.768 07:55:53 -- common/autotest_common.sh@10 -- # set +x 00:04:42.026 07:55:53 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:42.026 07:55:53 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:42.026 07:55:53 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:42.026 07:55:53 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:42.026 07:55:53 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:42.026 07:55:53 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:42.026 07:55:53 -- common/autotest_common.sh@1450 -- # uname 00:04:42.026 07:55:53 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:42.026 07:55:53 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:42.026 07:55:53 -- common/autotest_common.sh@1470 -- # uname 00:04:42.026 07:55:53 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:42.026 07:55:53 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:42.026 07:55:53 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:42.026 lcov: LCOV version 1.15 00:04:42.026 07:55:53 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:50.175 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:50.175 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:50.175 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:50.175 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:50.175 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:50.175 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:08.357 07:56:18 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:08.357 07:56:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:08.357 07:56:18 -- common/autotest_common.sh@10 -- # set +x 00:05:08.357 07:56:18 -- spdk/autotest.sh@89 -- # rm -f 00:05:08.357 07:56:18 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.357 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:08.357 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:08.357 07:56:19 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:08.357 07:56:19 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:08.357 07:56:19 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:08.357 07:56:19 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:08.357 07:56:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:08.357 07:56:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:08.357 07:56:19 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:08.357 07:56:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:08.357 07:56:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:08.357 07:56:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:08.357 07:56:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:08.357 07:56:19 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:08.357 07:56:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:08.357 07:56:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:08.357 07:56:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:08.357 07:56:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:08.357 07:56:19 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:08.357 07:56:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:08.357 07:56:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:08.357 07:56:19 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:08.357 07:56:19 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:08.357 07:56:19 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:08.357 07:56:19 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:08.357 07:56:19 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:08.357 07:56:19 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:08.357 07:56:19 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:08.357 07:56:19 -- spdk/autotest.sh@108 -- # grep -v p 00:05:08.357 07:56:19 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:08.357 07:56:19 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:08.357 07:56:19 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:08.357 07:56:19 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:08.357 07:56:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py 
/dev/nvme0n1 00:05:08.357 No valid GPT data, bailing 00:05:08.357 07:56:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:08.357 07:56:19 -- scripts/common.sh@393 -- # pt= 00:05:08.358 07:56:19 -- scripts/common.sh@394 -- # return 1 00:05:08.358 07:56:19 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:08.358 1+0 records in 00:05:08.358 1+0 records out 00:05:08.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479817 s, 219 MB/s 00:05:08.358 07:56:19 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:08.358 07:56:19 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:08.358 07:56:19 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:08.358 07:56:19 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:08.358 07:56:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:08.358 No valid GPT data, bailing 00:05:08.358 07:56:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:08.358 07:56:19 -- scripts/common.sh@393 -- # pt= 00:05:08.358 07:56:19 -- scripts/common.sh@394 -- # return 1 00:05:08.358 07:56:19 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:08.358 1+0 records in 00:05:08.358 1+0 records out 00:05:08.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468856 s, 224 MB/s 00:05:08.358 07:56:19 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:08.358 07:56:19 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:08.358 07:56:19 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:08.358 07:56:19 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:08.358 07:56:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:08.358 No valid GPT data, bailing 00:05:08.358 07:56:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:08.358 07:56:19 -- scripts/common.sh@393 -- # pt= 00:05:08.358 07:56:19 -- scripts/common.sh@394 -- # return 1 00:05:08.358 07:56:19 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:08.358 1+0 records in 00:05:08.358 1+0 records out 00:05:08.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499803 s, 210 MB/s 00:05:08.358 07:56:19 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:08.358 07:56:19 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:08.358 07:56:19 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:08.358 07:56:19 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:08.358 07:56:19 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:08.358 No valid GPT data, bailing 00:05:08.358 07:56:19 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:08.358 07:56:19 -- scripts/common.sh@393 -- # pt= 00:05:08.358 07:56:19 -- scripts/common.sh@394 -- # return 1 00:05:08.358 07:56:19 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:08.358 1+0 records in 00:05:08.358 1+0 records out 00:05:08.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00468675 s, 224 MB/s 00:05:08.358 07:56:19 -- spdk/autotest.sh@116 -- # sync 00:05:08.358 07:56:19 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:08.358 07:56:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:08.358 07:56:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 
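The pre-cleanup pass traced above probes each NVMe namespace for an existing partition table (first via scripts/spdk-gpt.py, then blkid) and zeroes the first MiB of anything that looks unused. A condensed sketch of that loop, with the zoned-namespace guard folded inline; the real autotest.sh collects zoned devices separately and only falls back to blkid after the GPT helper:

#!/usr/bin/env bash
# Condensed sketch of the pre_cleanup wipe loop traced above. Simplifications:
# only the blkid probe is kept, and the zoned check is done inline per device.
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    name=${dev#/dev/}

    # is_block_zoned: leave zoned namespaces alone
    if [[ -e /sys/block/$name/queue/zoned ]] \
       && [[ $(< /sys/block/$name/queue/zoned) != none ]]; then
        continue
    fi

    # block_in_use: a device with a recognizable partition table is in use
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    [[ -n $pt ]] && continue

    # otherwise wipe the first MiB so later tests start from a blank device
    dd if=/dev/zero of="$dev" bs=1M count=1
done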
00:05:10.257 07:56:21 -- spdk/autotest.sh@122 -- # uname -s 00:05:10.257 07:56:21 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:10.257 07:56:21 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:10.257 07:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.257 07:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.257 07:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.257 ************************************ 00:05:10.257 START TEST setup.sh 00:05:10.257 ************************************ 00:05:10.257 07:56:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:10.257 * Looking for test storage... 00:05:10.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.257 07:56:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.257 07:56:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.257 07:56:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.516 07:56:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.516 07:56:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.516 07:56:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.516 07:56:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.516 07:56:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.516 07:56:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.516 07:56:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.516 07:56:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.516 07:56:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.516 07:56:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.516 07:56:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.516 07:56:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.516 07:56:21 -- scripts/common.sh@344 -- # : 1 00:05:10.516 07:56:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.516 07:56:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.516 07:56:21 -- scripts/common.sh@364 -- # decimal 1 00:05:10.516 07:56:21 -- scripts/common.sh@352 -- # local d=1 00:05:10.516 07:56:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.516 07:56:21 -- scripts/common.sh@354 -- # echo 1 00:05:10.516 07:56:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.516 07:56:21 -- scripts/common.sh@365 -- # decimal 2 00:05:10.516 07:56:21 -- scripts/common.sh@352 -- # local d=2 00:05:10.516 07:56:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.516 07:56:21 -- scripts/common.sh@354 -- # echo 2 00:05:10.516 07:56:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.516 07:56:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.516 07:56:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.516 07:56:21 -- scripts/common.sh@367 -- # return 0 00:05:10.516 07:56:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.516 --rc geninfo_unexecuted_blocks=1 00:05:10.516 00:05:10.516 ' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.516 --rc geninfo_unexecuted_blocks=1 00:05:10.516 00:05:10.516 ' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.516 --rc geninfo_unexecuted_blocks=1 00:05:10.516 00:05:10.516 ' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.516 --rc geninfo_unexecuted_blocks=1 00:05:10.516 00:05:10.516 ' 00:05:10.516 07:56:21 -- setup/test-setup.sh@10 -- # uname -s 00:05:10.516 07:56:21 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:10.516 07:56:21 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:10.516 07:56:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:10.516 07:56:21 -- common/autotest_common.sh@10 -- # set +x 00:05:10.516 ************************************ 00:05:10.516 START TEST acl 00:05:10.516 ************************************ 00:05:10.516 07:56:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:10.516 * Looking for test storage... 
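Each test script repeats the lcov-version probe traced here: it checks whether the installed lcov predates 2.x so it can pick the matching set of --rc option names. Reconstructed from the trace (the real scripts/common.sh helpers may differ in detail), the comparison works roughly like this:

#!/usr/bin/env bash
# Reconstructed sketch of the "lt 1.15 2" walk traced above: split both version
# strings on '.', '-' and ':' and compare field by field, padding with 0.
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        if (( ver1[v] > ver2[v] )); then [[ $op == '>' ]]; return; fi
        if (( ver1[v] < ver2[v] )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]   # all fields equal: only '==' counts as a match here
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2.x: use the lcov_* --rc option names"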
00:05:10.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:10.516 07:56:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:10.516 07:56:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:10.516 07:56:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:10.516 07:56:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:10.516 07:56:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:10.516 07:56:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:10.516 07:56:21 -- scripts/common.sh@335 -- # IFS=.-: 00:05:10.516 07:56:21 -- scripts/common.sh@335 -- # read -ra ver1 00:05:10.516 07:56:21 -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.516 07:56:21 -- scripts/common.sh@336 -- # read -ra ver2 00:05:10.516 07:56:21 -- scripts/common.sh@337 -- # local 'op=<' 00:05:10.516 07:56:21 -- scripts/common.sh@339 -- # ver1_l=2 00:05:10.516 07:56:21 -- scripts/common.sh@340 -- # ver2_l=1 00:05:10.516 07:56:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:10.516 07:56:21 -- scripts/common.sh@343 -- # case "$op" in 00:05:10.516 07:56:21 -- scripts/common.sh@344 -- # : 1 00:05:10.516 07:56:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:10.516 07:56:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.516 07:56:21 -- scripts/common.sh@364 -- # decimal 1 00:05:10.516 07:56:21 -- scripts/common.sh@352 -- # local d=1 00:05:10.516 07:56:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.516 07:56:21 -- scripts/common.sh@354 -- # echo 1 00:05:10.516 07:56:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:10.516 07:56:21 -- scripts/common.sh@365 -- # decimal 2 00:05:10.516 07:56:21 -- scripts/common.sh@352 -- # local d=2 00:05:10.516 07:56:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.516 07:56:21 -- scripts/common.sh@354 -- # echo 2 00:05:10.516 07:56:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:10.516 07:56:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:10.516 07:56:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:10.516 07:56:21 -- scripts/common.sh@367 -- # return 0 00:05:10.516 07:56:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.516 --rc geninfo_unexecuted_blocks=1 00:05:10.516 00:05:10.516 ' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.516 --rc geninfo_unexecuted_blocks=1 00:05:10.516 00:05:10.516 ' 00:05:10.516 07:56:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.516 --rc geninfo_unexecuted_blocks=1 00:05:10.516 00:05:10.516 ' 00:05:10.516 07:56:21 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:10.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.516 --rc genhtml_branch_coverage=1 00:05:10.516 --rc genhtml_function_coverage=1 00:05:10.516 --rc genhtml_legend=1 00:05:10.516 --rc geninfo_all_blocks=1 00:05:10.517 --rc geninfo_unexecuted_blocks=1 00:05:10.517 00:05:10.517 ' 00:05:10.517 07:56:21 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:10.517 07:56:21 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:10.517 07:56:21 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:10.517 07:56:21 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:10.517 07:56:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:10.517 07:56:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:10.517 07:56:21 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:10.517 07:56:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:10.517 07:56:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:10.517 07:56:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:10.517 07:56:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:10.517 07:56:21 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:10.517 07:56:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:10.517 07:56:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:10.517 07:56:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:10.517 07:56:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:10.517 07:56:21 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:10.517 07:56:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:10.517 07:56:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:10.517 07:56:21 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:10.517 07:56:21 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:10.517 07:56:21 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:10.517 07:56:21 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:10.517 07:56:21 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:10.517 07:56:21 -- setup/acl.sh@12 -- # devs=() 00:05:10.517 07:56:21 -- setup/acl.sh@12 -- # declare -a devs 00:05:10.517 07:56:21 -- setup/acl.sh@13 -- # drivers=() 00:05:10.517 07:56:21 -- setup/acl.sh@13 -- # declare -A drivers 00:05:10.517 07:56:21 -- setup/acl.sh@51 -- # setup reset 00:05:10.517 07:56:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:10.517 07:56:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.451 07:56:22 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:11.451 07:56:22 -- setup/acl.sh@16 -- # local dev driver 00:05:11.451 07:56:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.451 07:56:22 -- setup/acl.sh@15 -- # setup output status 00:05:11.451 07:56:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.451 07:56:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:11.451 Hugepages 00:05:11.451 node hugesize free / total 00:05:11.451 07:56:22 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:11.451 07:56:22 -- setup/acl.sh@19 -- # continue 00:05:11.451 07:56:22 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:11.451 00:05:11.451 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:11.451 07:56:22 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:11.451 07:56:22 -- setup/acl.sh@19 -- # continue 00:05:11.451 07:56:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.451 07:56:22 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:11.451 07:56:22 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:11.451 07:56:22 -- setup/acl.sh@20 -- # continue 00:05:11.451 07:56:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.709 07:56:22 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:11.709 07:56:22 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:11.709 07:56:22 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:11.709 07:56:22 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:11.709 07:56:22 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:11.709 07:56:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.709 07:56:22 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:11.709 07:56:22 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:11.709 07:56:22 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:11.709 07:56:22 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:11.709 07:56:22 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:11.709 07:56:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:11.709 07:56:22 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:11.709 07:56:22 -- setup/acl.sh@54 -- # run_test denied denied 00:05:11.709 07:56:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.709 07:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.709 07:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:11.709 ************************************ 00:05:11.709 START TEST denied 00:05:11.709 ************************************ 00:05:11.709 07:56:22 -- common/autotest_common.sh@1114 -- # denied 00:05:11.709 07:56:22 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:11.709 07:56:22 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:11.709 07:56:22 -- setup/acl.sh@38 -- # setup output config 00:05:11.709 07:56:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.709 07:56:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:12.644 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:12.644 07:56:23 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:12.644 07:56:23 -- setup/acl.sh@28 -- # local dev driver 00:05:12.644 07:56:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:12.644 07:56:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:12.644 07:56:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:12.644 07:56:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:12.644 07:56:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:12.644 07:56:23 -- setup/acl.sh@41 -- # setup reset 00:05:12.644 07:56:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:12.644 07:56:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:13.210 00:05:13.210 real 0m1.478s 00:05:13.210 user 0m0.601s 00:05:13.210 sys 0m0.814s 00:05:13.210 07:56:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.210 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:05:13.210 ************************************ 00:05:13.210 END TEST denied 00:05:13.210 
************************************ 00:05:13.210 07:56:24 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:13.210 07:56:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.210 07:56:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.210 07:56:24 -- common/autotest_common.sh@10 -- # set +x 00:05:13.210 ************************************ 00:05:13.210 START TEST allowed 00:05:13.210 ************************************ 00:05:13.210 07:56:24 -- common/autotest_common.sh@1114 -- # allowed 00:05:13.210 07:56:24 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:13.210 07:56:24 -- setup/acl.sh@45 -- # setup output config 00:05:13.210 07:56:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.210 07:56:24 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:13.210 07:56:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:14.144 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:14.144 07:56:25 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:14.144 07:56:25 -- setup/acl.sh@28 -- # local dev driver 00:05:14.144 07:56:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:14.144 07:56:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:14.144 07:56:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:14.144 07:56:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:14.144 07:56:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:14.144 07:56:25 -- setup/acl.sh@48 -- # setup reset 00:05:14.144 07:56:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.144 07:56:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.709 00:05:14.709 real 0m1.536s 00:05:14.709 user 0m0.695s 00:05:14.709 sys 0m0.845s 00:05:14.709 07:56:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.709 07:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:14.709 ************************************ 00:05:14.709 END TEST allowed 00:05:14.709 ************************************ 00:05:14.709 00:05:14.709 real 0m4.411s 00:05:14.709 user 0m1.952s 00:05:14.709 sys 0m2.432s 00:05:14.709 07:56:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.709 07:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:14.709 ************************************ 00:05:14.709 END TEST acl 00:05:14.709 ************************************ 00:05:14.968 07:56:26 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:14.968 07:56:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.968 07:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.968 07:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:14.968 ************************************ 00:05:14.968 START TEST hugepages 00:05:14.968 ************************************ 00:05:14.968 07:56:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:14.968 * Looking for test storage... 
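The two ACL sub-tests that just finished reduce to driving scripts/setup.sh with a PCI block/allow list and grepping its output for the expected decision. A rough sketch; the grep patterns are the ones visible in the trace, and the setup.sh reset runs between the two checks are omitted:

#!/usr/bin/env bash
# Rough sketch of the 'denied' and 'allowed' ACL checks traced above.
SETUP=/home/vagrant/spdk_repo/spdk/scripts/setup.sh

# denied: a controller on the block list must be skipped during config
PCI_BLOCKED=" 0000:00:06.0" "$SETUP" config \
    | grep 'Skipping denied controller at 0000:00:06.0'

# allowed: with an allow list, the listed controller must still be rebound
PCI_ALLOWED="0000:00:06.0" "$SETUP" config \
    | grep -E '0000:00:06.0 .*: nvme -> .*'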
00:05:14.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.968 07:56:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.968 07:56:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.968 07:56:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.968 07:56:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.968 07:56:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.968 07:56:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.968 07:56:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.968 07:56:26 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.968 07:56:26 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.968 07:56:26 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.968 07:56:26 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.969 07:56:26 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.969 07:56:26 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.969 07:56:26 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.969 07:56:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.969 07:56:26 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.969 07:56:26 -- scripts/common.sh@344 -- # : 1 00:05:14.969 07:56:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.969 07:56:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.969 07:56:26 -- scripts/common.sh@364 -- # decimal 1 00:05:14.969 07:56:26 -- scripts/common.sh@352 -- # local d=1 00:05:14.969 07:56:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.969 07:56:26 -- scripts/common.sh@354 -- # echo 1 00:05:14.969 07:56:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.969 07:56:26 -- scripts/common.sh@365 -- # decimal 2 00:05:14.969 07:56:26 -- scripts/common.sh@352 -- # local d=2 00:05:14.969 07:56:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.969 07:56:26 -- scripts/common.sh@354 -- # echo 2 00:05:14.969 07:56:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.969 07:56:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.969 07:56:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.969 07:56:26 -- scripts/common.sh@367 -- # return 0 00:05:14.969 07:56:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.969 07:56:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.969 --rc genhtml_branch_coverage=1 00:05:14.969 --rc genhtml_function_coverage=1 00:05:14.969 --rc genhtml_legend=1 00:05:14.969 --rc geninfo_all_blocks=1 00:05:14.969 --rc geninfo_unexecuted_blocks=1 00:05:14.969 00:05:14.969 ' 00:05:14.969 07:56:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.969 --rc genhtml_branch_coverage=1 00:05:14.969 --rc genhtml_function_coverage=1 00:05:14.969 --rc genhtml_legend=1 00:05:14.969 --rc geninfo_all_blocks=1 00:05:14.969 --rc geninfo_unexecuted_blocks=1 00:05:14.969 00:05:14.969 ' 00:05:14.969 07:56:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.969 --rc genhtml_branch_coverage=1 00:05:14.969 --rc genhtml_function_coverage=1 00:05:14.969 --rc genhtml_legend=1 00:05:14.969 --rc geninfo_all_blocks=1 00:05:14.969 --rc geninfo_unexecuted_blocks=1 00:05:14.969 00:05:14.969 ' 00:05:14.969 07:56:26 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.969 --rc genhtml_branch_coverage=1 00:05:14.969 --rc genhtml_function_coverage=1 00:05:14.969 --rc genhtml_legend=1 00:05:14.969 --rc geninfo_all_blocks=1 00:05:14.969 --rc geninfo_unexecuted_blocks=1 00:05:14.969 00:05:14.969 ' 00:05:14.969 07:56:26 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:14.969 07:56:26 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:14.969 07:56:26 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:14.969 07:56:26 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:14.969 07:56:26 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:14.969 07:56:26 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:14.969 07:56:26 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:14.969 07:56:26 -- setup/common.sh@18 -- # local node= 00:05:14.969 07:56:26 -- setup/common.sh@19 -- # local var val 00:05:14.969 07:56:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:14.969 07:56:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.969 07:56:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.969 07:56:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.969 07:56:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.969 07:56:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 4416936 kB' 'MemAvailable: 7343196 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 496436 kB' 'Inactive: 2749812 kB' 'Active(anon): 127268 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118380 kB' 'Mapped: 51160 kB' 'Shmem: 10512 kB' 'KReclaimable: 88456 kB' 'Slab: 191756 kB' 'SReclaimable: 88456 kB' 'SUnreclaim: 103300 kB' 'KernelStack: 6880 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 319804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- 
setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.969 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.969 07:56:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # continue 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:14.970 07:56:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:14.970 07:56:26 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:14.970 07:56:26 -- setup/common.sh@33 -- # echo 2048 00:05:14.970 07:56:26 -- setup/common.sh@33 -- # return 0 00:05:15.228 07:56:26 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:15.228 07:56:26 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:15.228 07:56:26 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:15.228 07:56:26 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:15.228 07:56:26 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:15.228 07:56:26 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:15.228 07:56:26 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:15.228 07:56:26 -- setup/hugepages.sh@207 -- # get_nodes 00:05:15.228 07:56:26 -- setup/hugepages.sh@27 -- # local node 00:05:15.228 07:56:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.228 07:56:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:15.229 07:56:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.229 07:56:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.229 07:56:26 -- setup/hugepages.sh@208 -- # clear_hp 00:05:15.229 07:56:26 -- setup/hugepages.sh@37 -- # local node hp 00:05:15.229 07:56:26 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:15.229 07:56:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.229 07:56:26 -- setup/hugepages.sh@41 -- # echo 0 00:05:15.229 07:56:26 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.229 07:56:26 -- setup/hugepages.sh@41 -- # echo 0 00:05:15.229 07:56:26 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:15.229 07:56:26 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:15.229 07:56:26 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:15.229 07:56:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.229 07:56:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.229 07:56:26 -- common/autotest_common.sh@10 -- # set +x 00:05:15.229 ************************************ 00:05:15.229 START TEST default_setup 00:05:15.229 ************************************ 00:05:15.229 07:56:26 -- common/autotest_common.sh@1114 -- # default_setup 00:05:15.229 07:56:26 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:15.229 07:56:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:15.229 07:56:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:15.229 07:56:26 -- setup/hugepages.sh@51 -- # shift 00:05:15.229 07:56:26 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:15.229 07:56:26 -- setup/hugepages.sh@52 -- # local node_ids 00:05:15.229 07:56:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:15.229 07:56:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:15.229 07:56:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:15.229 07:56:26 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:15.229 07:56:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:15.229 07:56:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:15.229 07:56:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:15.229 07:56:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:15.229 07:56:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:15.229 07:56:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:15.229 07:56:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:15.229 07:56:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:15.229 07:56:26 -- setup/hugepages.sh@73 -- # return 0 00:05:15.229 07:56:26 -- setup/hugepages.sh@137 -- # setup output 00:05:15.229 07:56:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.229 07:56:26 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:15.795 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.057 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.057 07:56:27 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:16.057 07:56:27 -- setup/hugepages.sh@89 -- # local node 00:05:16.057 07:56:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.057 07:56:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.057 07:56:27 -- setup/hugepages.sh@92 -- # local surp 00:05:16.057 07:56:27 -- setup/hugepages.sh@93 -- # local resv 00:05:16.057 07:56:27 -- setup/hugepages.sh@94 -- # local anon 00:05:16.057 07:56:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.057 07:56:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.057 07:56:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.057 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.057 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.057 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.057 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.057 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.057 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.057 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.057 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.057 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6536664 kB' 'MemAvailable: 9462780 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 498128 kB' 'Inactive: 2749816 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119864 kB' 'Mapped: 50976 kB' 'Shmem: 10488 kB' 'KReclaimable: 88160 kB' 'Slab: 191440 kB' 'SReclaimable: 88160 kB' 'SUnreclaim: 103280 kB' 'KernelStack: 6864 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.058 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.058 07:56:27 -- 
setup/common.sh@32 -- # continue 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.058 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.059 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.059 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.059 07:56:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:16.059 07:56:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.059 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.059 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.059 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.059 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.059 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.059 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.059 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.059 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.059 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537100 kB' 'MemAvailable: 9463228 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497512 kB' 'Inactive: 2749828 kB' 'Active(anon): 128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749828 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119472 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88160 kB' 'Slab: 191436 kB' 'SReclaimable: 88160 kB' 'SUnreclaim: 103276 kB' 'KernelStack: 6848 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.059 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.059 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 
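Note on the scan in progress here: this block, like the AnonHugePages and Hugepagesize scans before it, is the xtrace of a single meminfo lookup. With IFS=': ' the helper reads the meminfo file one "key: value" pair at a time, hits "continue" for every key that is not the one requested, and echoes the value once it matches (this pass will end in "echo 0" / "return 0" for HugePages_Surp). A minimal standalone sketch of that lookup follows; the function name, the sed prefix strip, and the process substitution are illustrative choices, not the actual test/setup/common.sh, which uses mapfile and the "${mem[@]#Node +([0-9]) }" expansion visible in the trace.

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above: scan "key: value" pairs and
# print the value for the requested key. get_meminfo_value is a hypothetical
# name standing in for the traced get_meminfo helper.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # With a node argument the trace switches to that node's meminfo and
    # strips the leading "Node <n> " prefix from every line.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"            # numeric value only; the "kB" unit lands in $_
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_value HugePages_Surp     # -> 0 on this VM
get_meminfo_value Hugepagesize       # -> 2048 (kB)
get_meminfo_value HugePages_Free 0   # per-node variant, as used for node0 at the end of this test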
00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- 
setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.060 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.060 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 
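For context on the surrounding checks: before this verification ran, the clear_hp step traced at the start of the test zeroed every per-node hugepage pool and exported CLEAR_HUGE=yes, and the default pool was then re-reserved through the kernel knobs named earlier (default_huge_nr and global_huge_nr). A rough sketch of that reset, under the assumption that the bare "echo 0" seen in the trace redirects into each pool's nr_hugepages file (xtrace does not show redirections); it must run as root.

#!/usr/bin/env bash
# Sketch of the clear_hp / re-reserve sequence. The redirection targets are
# assumed from the standard sysfs layout; the xtrace only shows "echo 0".
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
global_huge_nr=/proc/sys/vm/nr_hugepages

clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # drop this node's pool of this page size
        done
    done
    export CLEAR_HUGE=yes
}

clear_hp
# Re-reserve 1024 x 2 MiB pages; in this run the harness does this itself
# through scripts/setup.sh rather than a direct write.
echo 1024 > "$default_huge_nr"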
00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.061 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.061 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.061 07:56:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:16.061 07:56:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.061 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.061 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.061 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.061 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.061 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.061 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.061 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.061 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.061 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.061 
07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537100 kB' 'MemAvailable: 9463232 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497792 kB' 'Inactive: 2749832 kB' 'Active(anon): 128624 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119792 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88160 kB' 'Slab: 191436 kB' 'SReclaimable: 88160 kB' 'SUnreclaim: 103276 kB' 'KernelStack: 6848 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 
07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.061 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.061 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.062 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.062 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 
07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.063 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.063 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.063 nr_hugepages=1024 00:05:16.063 07:56:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:16.063 07:56:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:16.063 07:56:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.063 resv_hugepages=0 00:05:16.063 surplus_hugepages=0 00:05:16.063 07:56:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.063 anon_hugepages=0 00:05:16.063 07:56:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.063 07:56:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.063 07:56:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:16.063 07:56:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.063 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.063 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.063 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.063 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.063 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.063 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.063 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.063 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.063 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537100 kB' 'MemAvailable: 9463232 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497820 kB' 'Inactive: 2749832 kB' 'Active(anon): 128652 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119772 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88160 kB' 'Slab: 191436 kB' 
'SReclaimable: 88160 kB' 'SUnreclaim: 103276 kB' 'KernelStack: 6832 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.063 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.063 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 
07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- 
setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.064 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.064 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- 
setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.065 07:56:27 -- setup/common.sh@33 -- # echo 1024 00:05:16.065 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.065 07:56:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:16.065 07:56:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.065 07:56:27 -- setup/hugepages.sh@27 -- # local node 00:05:16.065 07:56:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.065 07:56:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:16.065 07:56:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.065 07:56:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.065 07:56:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.065 07:56:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.065 07:56:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.065 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.065 07:56:27 -- setup/common.sh@18 -- # local node=0 00:05:16.065 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.065 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.065 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.065 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.065 07:56:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.065 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.065 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537100 kB' 'MemUsed: 5702012 kB' 'SwapCached: 0 kB' 'Active: 497680 kB' 'Inactive: 2749832 kB' 'Active(anon): 128512 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3129488 kB' 'Mapped: 50916 kB' 'AnonPages: 119620 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88160 kB' 'Slab: 191432 kB' 'SReclaimable: 88160 kB' 'SUnreclaim: 103272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.065 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.065 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 
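The trace above and immediately below is setup/common.sh's get_meminfo helper scanning a meminfo file field by field (first /proc/meminfo for HugePages_Total, which resolves to 1024, then /sys/devices/system/node/node0/meminfo for the per-node HugePages_Surp) and echoing the value of the single field it was asked for. As a rough, self-contained sketch of that lookup, assuming only standard bash plus extglob, something along these lines reproduces the behaviour; it is a simplified reconstruction from the trace, not a verbatim copy of setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob                          # the +([0-9]) patterns below need extglob

# get_meminfo <field> [node] -- print the value of <field> from /proc/meminfo,
# or from /sys/devices/system/node/node<node>/meminfo when a node is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total               # e.g. 1024
get_meminfo HugePages_Surp 0              # per-node surplus pages, e.g. 0

verify_nr_hugepages then only has to check that HugePages_Total matches nr_hugepages + surp + resv and fold the per-node counts together, which is why this test finishes by echoing 'node0=1024 expecting 1024' further down in the trace.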
00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.066 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.066 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.067 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.067 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.067 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.067 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.067 node0=1024 expecting 1024 00:05:16.067 07:56:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.067 07:56:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.067 07:56:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.067 07:56:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.067 07:56:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:16.067 07:56:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:16.067 00:05:16.067 real 0m1.021s 00:05:16.067 user 0m0.494s 00:05:16.067 sys 0m0.442s 00:05:16.067 ************************************ 00:05:16.067 END TEST default_setup 00:05:16.067 ************************************ 00:05:16.067 07:56:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.067 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:16.326 07:56:27 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:16.326 07:56:27 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.326 07:56:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.326 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:16.326 ************************************ 00:05:16.326 START TEST per_node_1G_alloc 00:05:16.326 ************************************ 00:05:16.326 07:56:27 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:16.326 07:56:27 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:16.326 07:56:27 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:16.326 07:56:27 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:16.326 07:56:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:16.326 07:56:27 -- setup/hugepages.sh@51 -- # shift 00:05:16.326 07:56:27 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:16.326 07:56:27 -- setup/hugepages.sh@52 -- # local node_ids 00:05:16.326 07:56:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.326 07:56:27 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:16.326 07:56:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:16.326 07:56:27 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:16.326 07:56:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.326 07:56:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:16.326 07:56:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.326 07:56:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.326 07:56:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.326 07:56:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:16.326 07:56:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:16.326 07:56:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:16.326 07:56:27 -- setup/hugepages.sh@73 -- # return 0 00:05:16.326 07:56:27 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:16.326 07:56:27 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:16.326 07:56:27 -- setup/hugepages.sh@146 -- # setup output 00:05:16.326 07:56:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.326 07:56:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.587 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:16.587 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.587 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:16.587 07:56:27 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:16.587 07:56:27 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:16.587 07:56:27 -- setup/hugepages.sh@89 -- # local node 00:05:16.587 07:56:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:16.587 07:56:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:16.588 07:56:27 -- setup/hugepages.sh@92 -- # local surp 00:05:16.588 07:56:27 -- setup/hugepages.sh@93 -- # local resv 00:05:16.588 07:56:27 -- setup/hugepages.sh@94 -- # local anon 00:05:16.588 07:56:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:16.588 07:56:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:16.588 07:56:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:16.588 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.588 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.588 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.588 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.588 07:56:27 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.588 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.588 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.588 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7585488 kB' 'MemAvailable: 10511620 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 498176 kB' 'Inactive: 2749832 kB' 'Active(anon): 129008 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120080 kB' 'Mapped: 51040 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191460 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103304 kB' 'KernelStack: 6836 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 
-- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 
07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.588 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.588 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:16.589 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.589 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.589 07:56:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:16.589 07:56:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:16.589 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.589 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.589 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.589 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.589 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.589 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.589 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.589 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.589 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7585488 kB' 'MemAvailable: 10511620 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497492 kB' 'Inactive: 2749832 kB' 
'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119504 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191460 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103304 kB' 'KernelStack: 6848 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.589 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.589 07:56:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # 
continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.590 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.590 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.591 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.591 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.591 07:56:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:16.591 07:56:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:16.591 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:16.591 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.591 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.591 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.591 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.591 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.591 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.591 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.591 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7585488 kB' 'MemAvailable: 10511620 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497604 kB' 'Inactive: 2749832 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119604 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191460 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103304 kB' 'KernelStack: 6848 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.591 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.591 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.592 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.592 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:16.593 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.593 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.593 nr_hugepages=512 00:05:16.593 07:56:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:16.593 07:56:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:16.593 resv_hugepages=0 00:05:16.593 07:56:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:16.593 surplus_hugepages=0 00:05:16.593 anon_hugepages=0 00:05:16.593 07:56:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:16.593 07:56:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:16.593 07:56:27 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:16.593 07:56:27 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:16.593 07:56:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:16.593 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:16.593 07:56:27 -- setup/common.sh@18 -- # local node= 00:05:16.593 07:56:27 -- setup/common.sh@19 -- # local var val 00:05:16.593 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.593 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.593 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.593 07:56:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.593 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.593 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7585488 kB' 'MemAvailable: 10511620 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497732 kB' 'Inactive: 2749832 kB' 'Active(anon): 128564 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119676 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191456 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103300 kB' 'KernelStack: 6832 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 
07:56:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.593 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.593 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.853 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 
07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.854 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.854 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:16.855 07:56:27 -- setup/common.sh@33 -- # echo 512 00:05:16.855 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.855 07:56:27 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:16.855 07:56:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:16.855 07:56:27 -- setup/hugepages.sh@27 -- # local node 00:05:16.855 07:56:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:16.855 07:56:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:16.855 07:56:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:16.855 07:56:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:16.855 07:56:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:16.855 07:56:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:16.855 07:56:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:16.855 07:56:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:16.855 07:56:27 -- setup/common.sh@18 -- # local node=0 00:05:16.855 07:56:27 -- setup/common.sh@19 -- # local 
var val 00:05:16.855 07:56:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.855 07:56:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.855 07:56:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:16.855 07:56:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:16.855 07:56:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.855 07:56:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7585488 kB' 'MemUsed: 4653624 kB' 'SwapCached: 0 kB' 'Active: 497692 kB' 'Inactive: 2749832 kB' 'Active(anon): 128524 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3129488 kB' 'Mapped: 50916 kB' 'AnonPages: 119632 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88156 kB' 'Slab: 191436 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.855 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.855 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- 
setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.856 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.856 07:56:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:16.856 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.857 07:56:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.857 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.857 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.857 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.857 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.857 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.857 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.857 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.857 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.857 07:56:27 -- setup/common.sh@32 -- # continue 00:05:16.857 07:56:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.857 07:56:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.857 07:56:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:16.857 07:56:27 -- setup/common.sh@33 -- # echo 0 00:05:16.857 07:56:27 -- setup/common.sh@33 -- # return 0 00:05:16.857 node0=512 expecting 512 00:05:16.857 ************************************ 00:05:16.857 END TEST per_node_1G_alloc 00:05:16.857 ************************************ 00:05:16.857 07:56:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:16.857 07:56:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:16.857 07:56:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:16.857 07:56:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:16.857 07:56:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:16.857 07:56:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:16.857 00:05:16.857 real 0m0.573s 00:05:16.857 user 0m0.281s 00:05:16.857 sys 0m0.294s 00:05:16.857 07:56:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.857 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:16.857 07:56:27 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:16.857 07:56:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.857 07:56:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.857 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:16.857 ************************************ 00:05:16.857 START TEST even_2G_alloc 00:05:16.857 ************************************ 00:05:16.857 07:56:27 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:16.857 07:56:27 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:16.857 07:56:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:16.857 07:56:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:16.857 07:56:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:16.857 07:56:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:16.857 07:56:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:16.857 07:56:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:16.857 07:56:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:16.857 07:56:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:16.857 07:56:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:16.857 07:56:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:16.857 07:56:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:16.857 07:56:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:16.857 07:56:27 -- 
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:16.857 07:56:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.857 07:56:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:16.857 07:56:27 -- setup/hugepages.sh@83 -- # : 0 00:05:16.857 07:56:27 -- setup/hugepages.sh@84 -- # : 0 00:05:16.857 07:56:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:16.857 07:56:27 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:16.857 07:56:27 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:16.857 07:56:27 -- setup/hugepages.sh@153 -- # setup output 00:05:16.857 07:56:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.857 07:56:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.116 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.116 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.116 07:56:28 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:17.117 07:56:28 -- setup/hugepages.sh@89 -- # local node 00:05:17.117 07:56:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.117 07:56:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.117 07:56:28 -- setup/hugepages.sh@92 -- # local surp 00:05:17.117 07:56:28 -- setup/hugepages.sh@93 -- # local resv 00:05:17.117 07:56:28 -- setup/hugepages.sh@94 -- # local anon 00:05:17.117 07:56:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.117 07:56:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.117 07:56:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.117 07:56:28 -- setup/common.sh@18 -- # local node= 00:05:17.117 07:56:28 -- setup/common.sh@19 -- # local var val 00:05:17.117 07:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.117 07:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.117 07:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.117 07:56:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.117 07:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.117 07:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532424 kB' 'MemAvailable: 9458556 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 498044 kB' 'Inactive: 2749832 kB' 'Active(anon): 128876 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120016 kB' 'Mapped: 51016 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191412 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103256 kB' 'KernelStack: 6840 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 
07:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.117 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.117 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.118 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.118 07:56:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.392 07:56:28 -- setup/common.sh@32 -- # 
continue 00:05:17.392 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.392 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.392 07:56:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.392 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.392 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.392 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.392 07:56:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.392 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.392 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.392 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.392 07:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.392 07:56:28 -- setup/common.sh@33 -- # echo 0 00:05:17.392 07:56:28 -- setup/common.sh@33 -- # return 0 00:05:17.392 07:56:28 -- setup/hugepages.sh@97 -- # anon=0 00:05:17.392 07:56:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.392 07:56:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.392 07:56:28 -- setup/common.sh@18 -- # local node= 00:05:17.392 07:56:28 -- setup/common.sh@19 -- # local var val 00:05:17.392 07:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.392 07:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.392 07:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.392 07:56:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.392 07:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.393 07:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532176 kB' 'MemAvailable: 9458308 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497836 kB' 'Inactive: 2749832 kB' 'Active(anon): 128668 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119776 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191412 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103256 kB' 'KernelStack: 6848 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 
00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.393 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.393 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # 
continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.394 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.394 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.394 07:56:28 -- setup/common.sh@33 -- # echo 0 00:05:17.394 07:56:28 -- setup/common.sh@33 -- # return 0 00:05:17.394 07:56:28 -- setup/hugepages.sh@99 -- # surp=0 00:05:17.394 07:56:28 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.394 07:56:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.394 07:56:28 -- setup/common.sh@18 -- # local node= 00:05:17.395 07:56:28 -- setup/common.sh@19 -- # local var val 00:05:17.395 07:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.395 07:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.395 07:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.395 07:56:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.395 07:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.395 07:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532176 kB' 'MemAvailable: 9458308 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497840 kB' 'Inactive: 2749832 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191408 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103252 kB' 'KernelStack: 6832 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 
00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.395 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.395 07:56:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- 
setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 
00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.396 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.396 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.396 07:56:28 -- setup/common.sh@33 -- # echo 0 00:05:17.397 07:56:28 -- setup/common.sh@33 -- # return 0 00:05:17.397 07:56:28 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.397 07:56:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:17.397 nr_hugepages=1024 00:05:17.397 07:56:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.397 resv_hugepages=0 00:05:17.397 07:56:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.397 surplus_hugepages=0 00:05:17.397 07:56:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.397 anon_hugepages=0 00:05:17.397 07:56:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.397 07:56:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:17.397 07:56:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.397 07:56:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.397 07:56:28 -- setup/common.sh@18 -- # local node= 00:05:17.397 07:56:28 -- setup/common.sh@19 -- # local var val 00:05:17.397 07:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.397 07:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.397 07:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.397 07:56:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.397 07:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.397 07:56:28 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532176 kB' 'MemAvailable: 9458308 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497800 kB' 'Inactive: 2749832 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119756 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191408 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103252 kB' 'KernelStack: 6848 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 
07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.397 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.397 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 
00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 
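At this point the hugepages.sh side of the trace is assembling a simple accounting identity: the HugePages_Total read back from meminfo must equal the requested page count plus surplus and reserved pages. With the values visible in the dumps above (total 1024, surplus 0, reserved 0, anonymous 0) the check reduces to 1024 == 1024 + 0 + 0. A minimal sketch of that check, reusing the get_meminfo helper sketched earlier:

nr_hugepages=1024                        # requested for the even_2G_alloc case
anon=$(get_meminfo AnonHugePages)        # 0 in this run
surp=$(get_meminfo HugePages_Surp)       # 0
resv=$(get_meminfo HugePages_Rsvd)       # 0
total=$(get_meminfo HugePages_Total)     # 1024
(( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage accounting' >&2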
00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.398 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.398 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.398 07:56:28 -- setup/common.sh@33 -- # echo 1024 00:05:17.398 07:56:28 -- setup/common.sh@33 -- # return 0 00:05:17.399 07:56:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:17.399 07:56:28 -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.399 07:56:28 -- setup/hugepages.sh@27 -- # local node 00:05:17.399 07:56:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.399 07:56:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:17.399 07:56:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.399 07:56:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.399 07:56:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.399 07:56:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.399 07:56:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.399 07:56:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.399 07:56:28 -- setup/common.sh@18 -- # local node=0 00:05:17.399 07:56:28 -- setup/common.sh@19 -- # local var val 00:05:17.399 07:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.399 07:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.399 07:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.399 07:56:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.399 07:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.399 07:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6532176 kB' 'MemUsed: 5706936 kB' 'SwapCached: 0 kB' 'Active: 497800 kB' 'Inactive: 2749832 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 3129488 kB' 'Mapped: 50916 kB' 'AnonPages: 119772 kB' 'Shmem: 10488 kB' 'KernelStack: 6848 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88156 kB' 'Slab: 191408 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:17.399 07:56:28 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 
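The lookup traced here runs against /sys/devices/system/node/node0/meminfo rather than /proc/meminfo (note the MemUsed field, which only the per-node files carry), so the same scan now returns node-local counters. The per-node bookkeeping that ends in the 'node0=1024 expecting 1024' line below amounts to roughly the following; treating nodes_test[0]=1024 as the whole request landing on the single node of this VM is an assumption read off the trace, not the verbatim hugepages.sh:

nodes_test=()                            # indexed by NUMA node id
nodes_test[0]=1024                       # expected hugepages on node 0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # += 0 here
    actual=$(get_meminfo HugePages_Total "$node")
    echo "node$node=$actual expecting ${nodes_test[node]}"
done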
00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.399 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.399 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- 
setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.400 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.400 07:56:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.400 07:56:28 -- setup/common.sh@33 -- # echo 0 00:05:17.400 07:56:28 -- setup/common.sh@33 -- # return 0 00:05:17.400 07:56:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.400 07:56:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.400 07:56:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.400 07:56:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.400 
node0=1024 expecting 1024 00:05:17.400 07:56:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:17.400 07:56:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:17.400 00:05:17.400 real 0m0.583s 00:05:17.400 user 0m0.305s 00:05:17.400 sys 0m0.278s 00:05:17.400 07:56:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.400 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.400 ************************************ 00:05:17.400 END TEST even_2G_alloc 00:05:17.400 ************************************ 00:05:17.400 07:56:28 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:17.400 07:56:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.400 07:56:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.400 07:56:28 -- common/autotest_common.sh@10 -- # set +x 00:05:17.400 ************************************ 00:05:17.400 START TEST odd_alloc 00:05:17.400 ************************************ 00:05:17.400 07:56:28 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:17.400 07:56:28 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:17.400 07:56:28 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:17.400 07:56:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:17.400 07:56:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.400 07:56:28 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:17.400 07:56:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:17.400 07:56:28 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.400 07:56:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.400 07:56:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:17.400 07:56:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.400 07:56:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.400 07:56:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.400 07:56:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.400 07:56:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:17.400 07:56:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.401 07:56:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:17.401 07:56:28 -- setup/hugepages.sh@83 -- # : 0 00:05:17.401 07:56:28 -- setup/hugepages.sh@84 -- # : 0 00:05:17.401 07:56:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.401 07:56:28 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:17.401 07:56:28 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:17.401 07:56:28 -- setup/hugepages.sh@160 -- # setup output 00:05:17.401 07:56:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.401 07:56:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:17.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.924 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.924 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:17.924 07:56:28 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:17.924 07:56:28 -- setup/hugepages.sh@89 -- # local node 00:05:17.924 07:56:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:17.924 07:56:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:17.924 07:56:28 -- setup/hugepages.sh@92 -- # local surp 00:05:17.924 07:56:28 -- setup/hugepages.sh@93 -- # local resv 00:05:17.924 07:56:28 -- setup/hugepages.sh@94 -- # local anon 00:05:17.924 07:56:28 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:17.924 07:56:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:17.924 07:56:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:17.924 07:56:28 -- setup/common.sh@18 -- # local node= 00:05:17.924 07:56:28 -- setup/common.sh@19 -- # local var val 00:05:17.924 07:56:28 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.924 07:56:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.924 07:56:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.924 07:56:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.924 07:56:28 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.924 07:56:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.924 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.924 07:56:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6535808 kB' 'MemAvailable: 9461940 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497604 kB' 'Inactive: 2749832 kB' 'Active(anon): 128436 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119780 kB' 'Mapped: 51028 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191376 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103220 kB' 'KernelStack: 6824 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:17.924 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.924 07:56:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.924 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.924 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.924 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 
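Note: the trace around this point is the body of the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-node meminfo file), strips any "Node <N> " prefix, then scans each "Key: value kB" line with an IFS=': ' read, skipping non-matching keys with continue and echoing the value of the requested field. A minimal sketch of that pattern is below; the helper name and error handling are simplified and hypothetical, and the real script differs in detail.

#!/usr/bin/env bash
# Hypothetical re-sketch of the lookup pattern traced above; the real
# helper lives in setup/common.sh and is more involved.
shopt -s extglob

meminfo_lookup() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A node argument switches to that node's own meminfo file, as the
    # later node=0 calls in this log do.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it so the
    # same "Key: value kB" scan handles both file formats.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

meminfo_lookup AnonHugePages      # prints e.g. 0 (kB) on this run
meminfo_lookup HugePages_Surp 0   # surplus 2 MiB pages on node0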
00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:28 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:28 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # 
continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:17.925 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:17.925 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:17.925 07:56:29 -- setup/hugepages.sh@97 -- # anon=0 00:05:17.925 07:56:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:17.925 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.925 07:56:29 -- setup/common.sh@18 -- # local node= 00:05:17.925 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:17.925 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.925 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.925 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.925 07:56:29 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.925 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.925 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6536312 kB' 'MemAvailable: 9462444 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497844 kB' 'Inactive: 2749832 kB' 'Active(anon): 128676 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119760 kB' 'Mapped: 51028 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191376 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103220 kB' 'KernelStack: 6808 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 
-- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 
00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.925 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.925 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 
00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.926 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:17.926 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:17.926 07:56:29 -- setup/hugepages.sh@99 -- # surp=0 00:05:17.926 07:56:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:17.926 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:17.926 07:56:29 -- setup/common.sh@18 -- # local node= 00:05:17.926 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:17.926 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.926 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.926 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.926 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.926 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.926 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6536060 kB' 'MemAvailable: 9462192 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497548 kB' 'Inactive: 2749832 kB' 'Active(anon): 128380 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119504 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191380 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103224 kB' 'KernelStack: 6848 kB' 
'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
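Note: as a quick sanity check on the meminfo snapshot printed above, the kernel-reported pool size is consistent with the per-page size: Hugetlb = HugePages_Total x Hugepagesize = 1025 x 2048 kB = 2099200 kB. The 1025-page target itself comes from the odd_alloc setup earlier in this log (HUGEMEM=2049, passed to get_test_nr_hugepages as 2098176 kB, which is 1024.5 default pages and ends up as the odd count 1025). A small, self-contained arithmetic check, using values copied from this log rather than re-read from the system:

#!/usr/bin/env bash
# Arithmetic check against the snapshot above (constants taken from this log).
hugepagesize_kb=2048            # Hugepagesize
hugepages_total=1025            # HugePages_Total
hugetlb_kb=2099200              # Hugetlb
hugemem_kb=$((2049 * 1024))     # HUGEMEM=2049 MiB -> 2098176 kB, as passed to get_test_nr_hugepages

(( hugepages_total * hugepagesize_kb == hugetlb_kb )) &&
    echo "Hugetlb matches HugePages_Total * Hugepagesize"

# One way to arrive at the logged count: round the request up to whole pages.
echo "requested: $hugemem_kb kB -> $(( (hugemem_kb + hugepagesize_kb - 1) / hugepagesize_kb )) pages"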
00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.926 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.926 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.927 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:17.927 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:17.927 07:56:29 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.927 nr_hugepages=1025 00:05:17.927 07:56:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:17.927 resv_hugepages=0 00:05:17.927 07:56:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:17.927 surplus_hugepages=0 00:05:17.927 07:56:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:17.927 07:56:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:17.927 anon_hugepages=0 00:05:17.927 07:56:29 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:17.927 07:56:29 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:17.927 07:56:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:17.927 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:17.927 07:56:29 -- setup/common.sh@18 -- # local node= 00:05:17.927 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:17.927 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.927 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.927 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.927 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.927 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.927 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6535560 kB' 'MemAvailable: 9461692 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497808 kB' 'Inactive: 2749832 kB' 'Active(anon): 128640 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119764 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191380 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103224 kB' 'KernelStack: 6848 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 
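Note: at this point the counters extracted above are anon=0, surp=0 and resv=0, and verify_nr_hugepages cross-checks them against the pool: the trace echoes the four counters, then asserts that HugePages_Total from /proc/meminfo equals nr_hugepages + surp + resv (and re-checks after re-reading HugePages_Total, which is the 1025 echoed above). A condensed sketch of that consistency check, assuming the hypothetical meminfo_lookup helper sketched earlier is in scope:

#!/usr/bin/env bash
# Condensed sketch of the consistency check traced above (illustrative only).
nr_hugepages=1025                         # odd_alloc target

anon=$(meminfo_lookup AnonHugePages)      # 0 in this run
surp=$(meminfo_lookup HugePages_Surp)     # 0
resv=$(meminfo_lookup HugePages_Rsvd)     # 0
total=$(meminfo_lookup HugePages_Total)   # 1025

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv" \
     "surplus_hugepages=$surp anon_hugepages=$anon"

# The pool is only considered healthy when the kernel-reported total
# accounts for the requested pages plus any surplus and reserved pages.
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent"
else
    echo "hugepage pool mismatch" >&2
    exit 1
fi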
00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 
00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.927 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:17.927 07:56:29 -- setup/common.sh@33 -- # echo 1025 00:05:17.927 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:17.927 07:56:29 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:17.927 07:56:29 -- setup/hugepages.sh@112 -- # get_nodes 00:05:17.927 07:56:29 -- setup/hugepages.sh@27 -- # local node 00:05:17.927 07:56:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.927 07:56:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
00:05:17.927 07:56:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.927 07:56:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.927 07:56:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:17.927 07:56:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:17.927 07:56:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:17.927 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:17.927 07:56:29 -- setup/common.sh@18 -- # local node=0 00:05:17.927 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:17.927 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.927 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.927 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:17.927 07:56:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:17.927 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.927 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.927 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6535560 kB' 'MemUsed: 5703552 kB' 'SwapCached: 0 kB' 'Active: 497508 kB' 'Inactive: 2749832 kB' 'Active(anon): 128340 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3129488 kB' 'Mapped: 50916 kB' 'AnonPages: 119728 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88156 kB' 'Slab: 191380 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 
07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 
07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # continue 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.928 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.928 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:17.928 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:17.928 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:17.928 07:56:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:17.928 07:56:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:17.928 07:56:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:17.928 node0=1025 expecting 1025 00:05:17.928 07:56:29 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:17.928 07:56:29 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:17.928 00:05:17.928 real 0m0.519s 00:05:17.928 user 0m0.264s 00:05:17.928 sys 0m0.290s 00:05:17.928 07:56:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:17.928 07:56:29 -- common/autotest_common.sh@10 -- # set +x 00:05:17.928 ************************************ 00:05:17.928 END TEST odd_alloc 00:05:17.928 ************************************ 00:05:17.928 07:56:29 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:17.928 07:56:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.928 07:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.928 07:56:29 -- common/autotest_common.sh@10 -- # set +x 00:05:17.928 ************************************ 00:05:17.928 START TEST custom_alloc 00:05:17.928 ************************************ 00:05:17.928 07:56:29 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:17.928 07:56:29 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:17.928 07:56:29 -- setup/hugepages.sh@169 -- # local node 00:05:17.928 07:56:29 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:17.928 07:56:29 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:17.928 07:56:29 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:17.928 07:56:29 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:17.928 07:56:29 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:17.928 07:56:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:17.928 07:56:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:17.928 07:56:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.928 07:56:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.928 07:56:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:17.928 07:56:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.928 07:56:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.928 07:56:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.928 07:56:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:17.928 07:56:29 -- setup/hugepages.sh@83 -- # : 0 00:05:17.928 07:56:29 -- setup/hugepages.sh@84 -- # : 0 00:05:17.928 07:56:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:17.928 07:56:29 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:17.928 07:56:29 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:17.928 07:56:29 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:17.928 07:56:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:17.928 07:56:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.928 07:56:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:17.928 07:56:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.928 07:56:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.928 07:56:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.928 07:56:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:17.928 07:56:29 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:17.928 07:56:29 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:17.928 07:56:29 -- setup/hugepages.sh@78 -- # return 0 00:05:17.928 07:56:29 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:17.928 07:56:29 -- setup/hugepages.sh@187 -- # setup output 00:05:17.928 07:56:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.928 07:56:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.493 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.493 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.493 07:56:29 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:18.493 07:56:29 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:18.493 07:56:29 -- setup/hugepages.sh@89 -- # local node 00:05:18.493 07:56:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.493 07:56:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.493 07:56:29 -- setup/hugepages.sh@92 -- # local surp 
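[editor's note] Here custom_alloc requests 1048576 kB (1 GiB) of hugepages; with the runner's default 2048 kB hugepage size that works out to 512 pages, and since this VM reports a single NUMA node the per-node helper assigns them all to node0, ending with HUGENODE='nodes_hp[0]=512'. A rough sketch of that arithmetic, using hypothetical variable names:

    # Hypothetical sketch of the size -> page-count conversion traced above.
    # Assumes the default hugepage size reported by /proc/meminfo (2048 kB here).
    size_kb=1048576                                   # requested amount, in kB
    default_hugepages=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)
    (( nr_hugepages = size_kb / default_hugepages ))  # 1048576 / 2048 = 512
    echo "nodes_hp[0]=$nr_hugepages"                  # single-node VM: everything on node0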
00:05:18.493 07:56:29 -- setup/hugepages.sh@93 -- # local resv 00:05:18.493 07:56:29 -- setup/hugepages.sh@94 -- # local anon 00:05:18.493 07:56:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:18.493 07:56:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.493 07:56:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:18.493 07:56:29 -- setup/common.sh@18 -- # local node= 00:05:18.493 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:18.493 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.493 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.493 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.493 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.493 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.493 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7584556 kB' 'MemAvailable: 10510688 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497840 kB' 'Inactive: 2749832 kB' 'Active(anon): 128672 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50988 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191360 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103204 kB' 'KernelStack: 6872 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.493 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.493 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:18.493 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:18.493 07:56:29 -- setup/hugepages.sh@97 -- # anon=0 00:05:18.493 07:56:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.493 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.493 07:56:29 -- setup/common.sh@18 -- # local node= 00:05:18.493 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:18.493 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.493 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
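[editor's note] verify_nr_hugepages first checks the transparent-hugepage setting ("always [madvise] never" here, i.e. not "[never]") and, when THP is not disabled, reads AnonHugePages from meminfo so anonymous hugepages can be accounted for separately; on this runner that yields anon=0, after which the same parse is repeated for HugePages_Surp and HugePages_Rsvd. A loose sketch of that THP check, with hypothetical names and the standard sysfs path assumed:

    # Hypothetical sketch of the anon-hugepage accounting seen in the trace above.
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_state != *"[never]"* ]]; then
        # THP may be in use; count anonymous hugepages separately
        anon=$(awk '/AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon=0
    fi
    echo "anon=${anon:-0} kB"   # 0 on this runner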
00:05:18.493 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.493 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.493 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.493 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.493 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7584556 kB' 'MemAvailable: 10510688 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 498024 kB' 'Inactive: 2749832 kB' 'Active(anon): 128856 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119960 kB' 'Mapped: 51044 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191360 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103204 kB' 'KernelStack: 6824 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- 
setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.494 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:18.494 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:18.494 07:56:29 -- setup/hugepages.sh@99 -- # surp=0 00:05:18.494 07:56:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.494 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.494 07:56:29 -- setup/common.sh@18 -- # local node= 00:05:18.494 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:18.494 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.494 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.494 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.494 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.494 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.494 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7584556 kB' 'MemAvailable: 10510688 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 498016 kB' 'Inactive: 2749832 kB' 'Active(anon): 128848 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119704 kB' 'Mapped: 
51044 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191360 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103204 kB' 'KernelStack: 6808 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.494 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.494 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 
00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 
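The long runs of [[ FieldName == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue statements through this stretch of trace are a single get_meminfo call scanning /proc/meminfo field by field until it hits HugePages_Rsvd; the same helper produces the later scans for HugePages_Total, HugePages_Surp and AnonHugePages. Below is a condensed sketch reconstructed from the trace, not copied from setup/common.sh (the real function shares the mem array and node handling with its callers), assuming bash with extglob:

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo helper behind the [[ ... ]]/continue
    # runs in this trace; names follow the trace, structure is assumed.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local mem var val _ line

        # With a node id, read that node's meminfo instead of the global one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it so
        # both file formats parse the same way (this needs extglob).
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            # Split "HugePages_Rsvd:    0" into key and value; this is the
            # IFS=': ' / read -r var val _ pair the xtrace repeats per field.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

On this box get_meminfo HugePages_Rsvd prints 0, and get_meminfo HugePages_Surp 0 reads node0's file, which is why the same field names keep reappearing in the trace.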
00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.495 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:18.495 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:18.495 07:56:29 -- setup/hugepages.sh@100 -- # resv=0 00:05:18.495 nr_hugepages=512 00:05:18.495 07:56:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:18.495 resv_hugepages=0 00:05:18.495 07:56:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.495 surplus_hugepages=0 00:05:18.495 07:56:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.495 anon_hugepages=0 00:05:18.495 07:56:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.495 07:56:29 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:18.495 07:56:29 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:18.495 07:56:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.495 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.495 07:56:29 -- setup/common.sh@18 -- # local node= 00:05:18.495 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:18.495 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.495 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.495 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.495 07:56:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.495 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.495 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7584556 kB' 'MemAvailable: 10510688 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 497772 kB' 'Inactive: 2749832 kB' 'Active(anon): 128604 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119684 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191356 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103200 kB' 'KernelStack: 6832 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 190316 kB' 
'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 
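The bookkeeping echoed a little earlier (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) together with the (( 512 == nr_hugepages + surp + resv )) tests is the pool-level part of verify_nr_hugepages; the scan running through this part of the trace is just get_meminfo fetching HugePages_Total for the second comparison. A hedged sketch of that check, assuming the get_meminfo helper sketched above; the function name here is mine, the real setup/hugepages.sh does this inline with shell globals:

    # Pool-level hugepage accounting, as suggested by the echoes in the trace.
    check_hugepage_pool() {
        local nr_hugepages=$1           # expected pool size, 512 in this test
        local surp resv anon total

        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        anon=$(get_meminfo AnonHugePages)
        total=$(get_meminfo HugePages_Total)

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # Everything the kernel allocated must be accounted for by the test's
        # own request plus surplus and reserved pages ...
        (( total == nr_hugepages + surp + resv )) || return 1
        # ... and with surp == resv == 0 the pool should match the target exactly.
        (( total == nr_hugepages ))
    }

    check_hugepage_pool 512   # passes here: HugePages_Total is 512, surp and resv are 0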
07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.495 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.495 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.496 07:56:29 -- setup/common.sh@33 -- # echo 512 00:05:18.496 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:18.496 07:56:29 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:18.496 07:56:29 -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.496 07:56:29 -- setup/hugepages.sh@27 -- # local node 00:05:18.496 07:56:29 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:18.496 07:56:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:18.496 07:56:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.496 07:56:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.496 07:56:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.496 07:56:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.496 07:56:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.496 07:56:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.496 07:56:29 -- setup/common.sh@18 -- # local node=0 00:05:18.496 07:56:29 -- setup/common.sh@19 -- # local var val 00:05:18.496 07:56:29 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.496 07:56:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.496 07:56:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.496 07:56:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.496 07:56:29 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.496 07:56:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 7584556 kB' 'MemUsed: 4654556 kB' 'SwapCached: 0 kB' 'Active: 497876 kB' 'Inactive: 2749832 kB' 'Active(anon): 128708 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3129488 kB' 'Mapped: 50916 kB' 'AnonPages: 119796 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88156 kB' 'Slab: 191356 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 
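The per-node walk that brackets this stretch (for node in /sys/devices/system/node/node+([0-9]), nodes_sys[0]=512, then the get_meminfo HugePages_Surp 0 scan against node0's meminfo) feeds the "node0=512 expecting 512" line printed a little further down: per node, the test adds reserved and surplus pages to that node's count and compares the result with the expected 512. A self-contained sketch using the kernel's per-node sysfs counters; the node-id handling mirrors the trace, but which file the real script reads the counts from is an assumption on my part:

    #!/usr/bin/env bash
    shopt -s extglob
    expected=512
    resv=0                      # reserved pages, 0 in this run
    declare -a nodes_test       # per-node hugepage counts, indexed by node id

    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        # 2 MiB pool size on this node (the real script may get this through
        # its get_meminfo helper rather than this sysfs file).
        nodes_test[id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done

    for id in "${!nodes_test[@]}"; do
        surp=$(< "/sys/devices/system/node/node$id/hugepages/hugepages-2048kB/surplus_hugepages")
        (( nodes_test[id] += resv + surp ))
        echo "node${id}=${nodes_test[id]} expecting $expected"
        [[ ${nodes_test[id]} == "$expected" ]]   # string compare, as in the trace
    done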
07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # continue 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.496 07:56:29 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.496 07:56:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.496 07:56:29 -- setup/common.sh@33 -- # echo 0 00:05:18.496 07:56:29 -- setup/common.sh@33 -- # return 0 00:05:18.496 07:56:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.496 07:56:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.496 07:56:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.496 07:56:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.496 node0=512 expecting 512 00:05:18.496 07:56:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:18.496 07:56:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:18.496 00:05:18.496 real 0m0.531s 00:05:18.496 user 0m0.274s 00:05:18.496 sys 0m0.290s 00:05:18.496 07:56:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.496 07:56:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.496 ************************************ 00:05:18.496 END TEST custom_alloc 00:05:18.496 ************************************ 00:05:18.496 07:56:29 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:18.496 07:56:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.496 07:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.496 07:56:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.496 ************************************ 00:05:18.496 START TEST no_shrink_alloc 00:05:18.496 ************************************ 00:05:18.496 07:56:29 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:18.496 07:56:29 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:18.496 07:56:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:18.496 07:56:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:18.496 07:56:29 -- 
setup/hugepages.sh@51 -- # shift 00:05:18.496 07:56:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:18.496 07:56:29 -- setup/hugepages.sh@52 -- # local node_ids 00:05:18.496 07:56:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.496 07:56:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:18.496 07:56:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:18.496 07:56:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:18.496 07:56:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.496 07:56:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:18.496 07:56:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:18.496 07:56:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.496 07:56:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.496 07:56:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:18.496 07:56:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:18.496 07:56:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:18.497 07:56:29 -- setup/hugepages.sh@73 -- # return 0 00:05:18.497 07:56:29 -- setup/hugepages.sh@198 -- # setup output 00:05:18.497 07:56:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.497 07:56:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.066 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.066 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.066 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.066 07:56:30 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:19.066 07:56:30 -- setup/hugepages.sh@89 -- # local node 00:05:19.066 07:56:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.066 07:56:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.066 07:56:30 -- setup/hugepages.sh@92 -- # local surp 00:05:19.066 07:56:30 -- setup/hugepages.sh@93 -- # local resv 00:05:19.066 07:56:30 -- setup/hugepages.sh@94 -- # local anon 00:05:19.066 07:56:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.066 07:56:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.066 07:56:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.066 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.066 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.066 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.066 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.066 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.066 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.066 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.066 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6538756 kB' 'MemAvailable: 9464888 kB' 'Buffers: 3704 kB' 'Cached: 3125784 kB' 'SwapCached: 0 kB' 'Active: 498224 kB' 'Inactive: 2749832 kB' 'Active(anon): 129056 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119972 kB' 
'Mapped: 51064 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191376 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103220 kB' 'KernelStack: 6824 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.066 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.066 07:56:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.066 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
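Between the END TEST custom_alloc banner and this point the log has moved on to no_shrink_alloc: get_test_nr_hugepages 2097152 0 turns a 2 GiB (2097152 kB) request into nr_hugepages=1024 pinned to node 0, which is consistent with the 2048 kB Hugepagesize in the meminfo dumps, setup.sh leaves the mounted vda partitions alone and keeps the two NVMe controllers on uio_pci_generic, and verify_nr_hugepages first confirms transparent hugepages are not set to [never] before scanning for the AnonHugePages baseline, which is what the [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] run around this line is doing. A small sketch of those two pieces; the division by Hugepagesize is inferred from the numbers in the trace, not taken from the script source:

    # Requested size in kB -> number of default-sized hugepages (2097152 / 2048 = 1024).
    size_kb=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"          # 1024 on this box

    # Record the AnonHugePages baseline unless THP is explicitly disabled;
    # "always [madvise] never" in the trace means it is not, so the test
    # goes on to read AnonHugePages (0 kB here).
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "anon_hugepages=$anon"
    fi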
00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.067 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.067 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.067 07:56:30 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.067 07:56:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.067 07:56:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.067 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.067 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.067 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.067 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.067 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.067 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.067 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.067 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6538756 kB' 'MemAvailable: 9464888 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 497880 kB' 'Inactive: 2749832 kB' 'Active(anon): 128712 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119664 kB' 'Mapped: 51048 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191376 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103220 kB' 'KernelStack: 6792 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.067 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.067 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.068 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.068 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.069 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.069 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.069 07:56:30 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.069 07:56:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.069 07:56:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.069 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.069 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.069 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.069 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.069 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.069 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.069 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.069 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6539176 kB' 'MemAvailable: 9465308 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 497768 kB' 'Inactive: 2749832 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 51064 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191372 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103216 kB' 'KernelStack: 6808 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 321772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': 
' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 
-- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.069 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.069 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.070 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.070 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.070 07:56:30 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.070 07:56:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.070 nr_hugepages=1024 00:05:19.070 resv_hugepages=0 00:05:19.070 07:56:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.070 surplus_hugepages=0 00:05:19.070 07:56:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.070 anon_hugepages=0 00:05:19.070 07:56:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.070 07:56:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.070 07:56:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.070 07:56:30 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:19.070 07:56:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.070 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.070 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.070 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.070 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.070 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.070 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.070 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.070 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6539432 kB' 'MemAvailable: 9465564 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 497816 kB' 'Inactive: 2749832 kB' 'Active(anon): 128648 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 51064 kB' 'Shmem: 10488 kB' 'KReclaimable: 88156 kB' 'Slab: 191372 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103216 kB' 'KernelStack: 6792 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.070 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.070 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.071 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.071 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.072 07:56:30 -- setup/common.sh@33 -- # echo 1024 00:05:19.072 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.072 07:56:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.072 07:56:30 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.072 07:56:30 -- setup/hugepages.sh@27 -- # local node 00:05:19.072 07:56:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.072 07:56:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.072 07:56:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.072 07:56:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.072 07:56:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.072 07:56:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.072 07:56:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.072 07:56:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.072 07:56:30 -- setup/common.sh@18 -- # local node=0 00:05:19.072 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.072 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.072 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.072 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.072 07:56:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.072 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.072 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6539452 kB' 'MemUsed: 5699660 kB' 'SwapCached: 0 kB' 'Active: 497768 kB' 'Inactive: 2749832 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 
'Active(file): 369168 kB' 'Inactive(file): 2749832 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3129492 kB' 'Mapped: 50916 kB' 'AnonPages: 119756 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88156 kB' 'Slab: 191372 kB' 'SReclaimable: 88156 kB' 'SUnreclaim: 103216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 
-- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.072 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.072 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.073 07:56:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.073 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.073 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.073 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.073 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.073 07:56:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.073 07:56:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.073 07:56:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.073 07:56:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.073 07:56:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:19.073 node0=1024 expecting 1024 00:05:19.073 07:56:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:19.073 07:56:30 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:19.073 07:56:30 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:19.073 07:56:30 -- setup/hugepages.sh@202 -- # setup output 00:05:19.073 07:56:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.073 07:56:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.639 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.639 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.639 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.639 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:19.640 07:56:30 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:19.640 07:56:30 -- setup/hugepages.sh@89 -- # local node 00:05:19.640 07:56:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.640 07:56:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.640 07:56:30 -- setup/hugepages.sh@92 -- # local surp 00:05:19.640 07:56:30 -- setup/hugepages.sh@93 -- # local resv 00:05:19.640 07:56:30 -- setup/hugepages.sh@94 -- # local anon 00:05:19.640 07:56:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.640 07:56:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.640 07:56:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.640 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.640 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.640 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.640 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.640 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.640 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.640 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.640 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537372 kB' 'MemAvailable: 9463492 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 494976 kB' 'Inactive: 2749836 kB' 'Active(anon): 125808 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117152 kB' 'Mapped: 50316 kB' 'Shmem: 10488 kB' 'KReclaimable: 88128 kB' 'Slab: 
191132 kB' 'SReclaimable: 88128 kB' 'SUnreclaim: 103004 kB' 'KernelStack: 6760 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55512 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.640 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.640 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.641 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.641 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.641 07:56:30 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.641 07:56:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.641 07:56:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.641 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.641 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.641 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.641 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.641 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.641 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.641 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.641 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537120 kB' 'MemAvailable: 9463240 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 494660 kB' 'Inactive: 2749836 kB' 'Active(anon): 125492 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116896 kB' 'Mapped: 50088 kB' 'Shmem: 10488 kB' 'KReclaimable: 88128 kB' 'Slab: 191136 kB' 'SReclaimable: 88128 kB' 'SUnreclaim: 103008 kB' 'KernelStack: 6736 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 
kB' 'DirectMap1G: 8388608 kB' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.641 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.641 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 
07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.642 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.642 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.642 07:56:30 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.642 07:56:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.642 07:56:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.642 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.642 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.642 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.642 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.642 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.642 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.642 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.642 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537120 kB' 'MemAvailable: 9463240 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 495012 kB' 'Inactive: 2749836 kB' 'Active(anon): 125844 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116944 kB' 'Mapped: 50140 kB' 'Shmem: 10488 kB' 'KReclaimable: 88128 kB' 'Slab: 191136 kB' 'SReclaimable: 88128 kB' 'SUnreclaim: 103008 kB' 'KernelStack: 6736 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.642 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.642 07:56:30 -- setup/common.sh@32 
-- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.642 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 
07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # 
continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.643 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.643 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.644 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.644 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.644 07:56:30 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.644 07:56:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.644 nr_hugepages=1024 00:05:19.644 07:56:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.644 resv_hugepages=0 00:05:19.644 07:56:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.644 surplus_hugepages=0 00:05:19.644 anon_hugepages=0 00:05:19.644 07:56:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.644 07:56:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.644 07:56:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.644 07:56:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.644 07:56:30 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:19.644 07:56:30 -- setup/common.sh@18 -- # local node= 00:05:19.644 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.644 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.644 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.644 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.644 07:56:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.644 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.644 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537120 kB' 'MemAvailable: 9463240 kB' 'Buffers: 3704 kB' 'Cached: 3125788 kB' 'SwapCached: 0 kB' 'Active: 494900 kB' 'Inactive: 2749836 kB' 'Active(anon): 125732 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116880 kB' 'Mapped: 50088 kB' 'Shmem: 10488 kB' 'KReclaimable: 88128 kB' 'Slab: 191136 kB' 'SReclaimable: 88128 kB' 'SUnreclaim: 103008 kB' 'KernelStack: 6736 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 303756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 190316 kB' 'DirectMap2M: 6100992 kB' 'DirectMap1G: 8388608 kB' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- 
setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.644 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.644 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.644 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 
07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.645 07:56:30 -- setup/common.sh@33 -- # echo 1024 00:05:19.645 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.645 07:56:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.645 07:56:30 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.645 07:56:30 -- setup/hugepages.sh@27 -- # local node 00:05:19.645 07:56:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.645 07:56:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.645 07:56:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.645 07:56:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.645 07:56:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.645 07:56:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.645 07:56:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.645 07:56:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.645 07:56:30 -- setup/common.sh@18 -- # local node=0 00:05:19.645 07:56:30 -- setup/common.sh@19 -- # local var val 00:05:19.645 07:56:30 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.645 07:56:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.645 07:56:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.645 07:56:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.645 07:56:30 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.645 07:56:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239112 kB' 'MemFree: 6537120 kB' 'MemUsed: 5701992 kB' 'SwapCached: 0 kB' 'Active: 494780 kB' 'Inactive: 2749836 kB' 'Active(anon): 125612 kB' 'Inactive(anon): 0 kB' 'Active(file): 369168 kB' 'Inactive(file): 2749836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 3129492 kB' 'Mapped: 50088 kB' 'AnonPages: 116776 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88128 kB' 'Slab: 191136 kB' 'SReclaimable: 88128 kB' 'SUnreclaim: 103008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.645 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.645 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 
07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- 
# continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@32 -- # continue 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.646 07:56:30 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.646 07:56:30 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.646 07:56:30 -- setup/common.sh@33 -- # echo 0 00:05:19.646 07:56:30 -- setup/common.sh@33 -- # return 0 00:05:19.646 07:56:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.646 07:56:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.646 node0=1024 expecting 1024 00:05:19.646 ************************************ 00:05:19.646 END TEST no_shrink_alloc 00:05:19.646 ************************************ 00:05:19.646 07:56:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.646 07:56:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.646 07:56:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:19.646 07:56:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:19.646 00:05:19.646 real 0m1.134s 00:05:19.646 user 0m0.540s 00:05:19.646 sys 0m0.614s 00:05:19.646 07:56:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.646 07:56:30 -- common/autotest_common.sh@10 -- # set +x 00:05:19.904 07:56:30 -- setup/hugepages.sh@217 -- # clear_hp 00:05:19.904 07:56:30 -- setup/hugepages.sh@37 -- # local node hp 00:05:19.904 07:56:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:19.904 07:56:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.904 07:56:30 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.904 07:56:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.904 07:56:30 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.904 07:56:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:19.904 07:56:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:19.904 00:05:19.904 real 0m4.915s 00:05:19.904 user 0m2.409s 00:05:19.904 sys 0m2.483s 00:05:19.904 07:56:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.904 07:56:30 -- common/autotest_common.sh@10 -- # set +x 00:05:19.904 ************************************ 00:05:19.904 END TEST hugepages 00:05:19.904 ************************************ 00:05:19.904 07:56:30 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:19.904 07:56:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.904 07:56:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.904 07:56:30 -- common/autotest_common.sh@10 -- # set +x 00:05:19.904 ************************************ 00:05:19.904 START TEST driver 00:05:19.904 ************************************ 00:05:19.904 07:56:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:19.904 * Looking for test storage... 
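
The long run of "[[ <key> == HugePages_Total ]] ... continue" entries above is the xtrace of setup/common.sh walking /proc/meminfo (and, for the node-0 lookup, /sys/devices/system/node/node0/meminfo) one "key: value" pair at a time until it reaches the requested field. A minimal standalone sketch of the same pattern follows; the function name get_meminfo_field is illustrative, not the SPDK helper itself.

#!/usr/bin/env bash
# Simplified re-implementation of the field lookup traced above. The helper in
# test/setup/common.sh also strips the "Node <n> " prefix from per-node meminfo
# files before matching (visible in the trace); sed mimics that here.
get_meminfo_field() {
    local want=$1 node=${2-}
    local mem_f=/proc/meminfo var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # every "continue" in the trace is one skipped key
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_field HugePages_Total      # 1024 on this test VM, per the trace
get_meminfo_field HugePages_Surp 0     # surplus hugepages on node 0 (0 above)
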
00:05:19.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.904 07:56:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.904 07:56:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.904 07:56:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.904 07:56:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.904 07:56:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.904 07:56:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.904 07:56:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.904 07:56:31 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.904 07:56:31 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.904 07:56:31 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.904 07:56:31 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.904 07:56:31 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.904 07:56:31 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.904 07:56:31 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.904 07:56:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.904 07:56:31 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.904 07:56:31 -- scripts/common.sh@344 -- # : 1 00:05:19.904 07:56:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.904 07:56:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.904 07:56:31 -- scripts/common.sh@364 -- # decimal 1 00:05:19.904 07:56:31 -- scripts/common.sh@352 -- # local d=1 00:05:19.904 07:56:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.904 07:56:31 -- scripts/common.sh@354 -- # echo 1 00:05:20.161 07:56:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.161 07:56:31 -- scripts/common.sh@365 -- # decimal 2 00:05:20.161 07:56:31 -- scripts/common.sh@352 -- # local d=2 00:05:20.161 07:56:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.161 07:56:31 -- scripts/common.sh@354 -- # echo 2 00:05:20.161 07:56:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.161 07:56:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.161 07:56:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.161 07:56:31 -- scripts/common.sh@367 -- # return 0 00:05:20.161 07:56:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.161 07:56:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.161 --rc genhtml_branch_coverage=1 00:05:20.161 --rc genhtml_function_coverage=1 00:05:20.161 --rc genhtml_legend=1 00:05:20.161 --rc geninfo_all_blocks=1 00:05:20.161 --rc geninfo_unexecuted_blocks=1 00:05:20.161 00:05:20.161 ' 00:05:20.161 07:56:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.161 --rc genhtml_branch_coverage=1 00:05:20.161 --rc genhtml_function_coverage=1 00:05:20.161 --rc genhtml_legend=1 00:05:20.161 --rc geninfo_all_blocks=1 00:05:20.162 --rc geninfo_unexecuted_blocks=1 00:05:20.162 00:05:20.162 ' 00:05:20.162 07:56:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.162 --rc genhtml_branch_coverage=1 00:05:20.162 --rc genhtml_function_coverage=1 00:05:20.162 --rc genhtml_legend=1 00:05:20.162 --rc geninfo_all_blocks=1 00:05:20.162 --rc geninfo_unexecuted_blocks=1 00:05:20.162 00:05:20.162 ' 00:05:20.162 07:56:31 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.162 --rc genhtml_branch_coverage=1 00:05:20.162 --rc genhtml_function_coverage=1 00:05:20.162 --rc genhtml_legend=1 00:05:20.162 --rc geninfo_all_blocks=1 00:05:20.162 --rc geninfo_unexecuted_blocks=1 00:05:20.162 00:05:20.162 ' 00:05:20.162 07:56:31 -- setup/driver.sh@68 -- # setup reset 00:05:20.162 07:56:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.162 07:56:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:20.727 07:56:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:20.727 07:56:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.727 07:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.727 07:56:31 -- common/autotest_common.sh@10 -- # set +x 00:05:20.727 ************************************ 00:05:20.727 START TEST guess_driver 00:05:20.727 ************************************ 00:05:20.727 07:56:31 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:20.727 07:56:31 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:20.727 07:56:31 -- setup/driver.sh@47 -- # local fail=0 00:05:20.727 07:56:31 -- setup/driver.sh@49 -- # pick_driver 00:05:20.727 07:56:31 -- setup/driver.sh@36 -- # vfio 00:05:20.727 07:56:31 -- setup/driver.sh@21 -- # local iommu_grups 00:05:20.727 07:56:31 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:20.727 07:56:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:20.727 07:56:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:20.727 07:56:31 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:20.727 07:56:31 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:20.727 07:56:31 -- setup/driver.sh@32 -- # return 1 00:05:20.727 07:56:31 -- setup/driver.sh@38 -- # uio 00:05:20.727 07:56:31 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:20.727 07:56:31 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:20.727 07:56:31 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:20.727 07:56:31 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:20.727 07:56:31 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:20.727 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:20.727 07:56:31 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:20.727 Looking for driver=uio_pci_generic 00:05:20.727 07:56:31 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:20.727 07:56:31 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:20.727 07:56:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:20.727 07:56:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:20.727 07:56:31 -- setup/driver.sh@45 -- # setup output config 00:05:20.727 07:56:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.727 07:56:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.292 07:56:32 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:21.292 07:56:32 -- setup/driver.sh@58 -- # continue 00:05:21.292 07:56:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.292 07:56:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.292 07:56:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:21.292 07:56:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.292 07:56:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.292 07:56:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:21.292 07:56:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.549 07:56:32 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:21.550 07:56:32 -- setup/driver.sh@65 -- # setup reset 00:05:21.550 07:56:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.550 07:56:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.117 00:05:22.117 real 0m1.440s 00:05:22.117 user 0m0.565s 00:05:22.117 sys 0m0.882s 00:05:22.117 07:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.117 07:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:22.117 ************************************ 00:05:22.117 END TEST guess_driver 00:05:22.117 ************************************ 00:05:22.117 ************************************ 00:05:22.117 END TEST driver 00:05:22.117 ************************************ 00:05:22.117 00:05:22.117 real 0m2.237s 00:05:22.117 user 0m0.878s 00:05:22.117 sys 0m1.421s 00:05:22.117 07:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.117 07:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:22.117 07:56:33 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:22.117 07:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.117 07:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.117 07:56:33 -- common/autotest_common.sh@10 -- # set +x 00:05:22.117 ************************************ 00:05:22.117 START TEST devices 00:05:22.117 ************************************ 00:05:22.117 07:56:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:22.117 * Looking for test storage... 00:05:22.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:22.117 07:56:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:22.117 07:56:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:22.117 07:56:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:22.376 07:56:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:22.376 07:56:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:22.376 07:56:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:22.376 07:56:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:22.376 07:56:33 -- scripts/common.sh@335 -- # IFS=.-: 00:05:22.376 07:56:33 -- scripts/common.sh@335 -- # read -ra ver1 00:05:22.376 07:56:33 -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.376 07:56:33 -- scripts/common.sh@336 -- # read -ra ver2 00:05:22.376 07:56:33 -- scripts/common.sh@337 -- # local 'op=<' 00:05:22.376 07:56:33 -- scripts/common.sh@339 -- # ver1_l=2 00:05:22.376 07:56:33 -- scripts/common.sh@340 -- # ver2_l=1 00:05:22.376 07:56:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:22.376 07:56:33 -- scripts/common.sh@343 -- # case "$op" in 00:05:22.376 07:56:33 -- scripts/common.sh@344 -- # : 1 00:05:22.376 07:56:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:22.376 07:56:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.376 07:56:33 -- scripts/common.sh@364 -- # decimal 1 00:05:22.376 07:56:33 -- scripts/common.sh@352 -- # local d=1 00:05:22.376 07:56:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.376 07:56:33 -- scripts/common.sh@354 -- # echo 1 00:05:22.376 07:56:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:22.376 07:56:33 -- scripts/common.sh@365 -- # decimal 2 00:05:22.376 07:56:33 -- scripts/common.sh@352 -- # local d=2 00:05:22.376 07:56:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.376 07:56:33 -- scripts/common.sh@354 -- # echo 2 00:05:22.376 07:56:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:22.376 07:56:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:22.376 07:56:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:22.376 07:56:33 -- scripts/common.sh@367 -- # return 0 00:05:22.376 07:56:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.376 07:56:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:22.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.376 --rc genhtml_branch_coverage=1 00:05:22.376 --rc genhtml_function_coverage=1 00:05:22.376 --rc genhtml_legend=1 00:05:22.376 --rc geninfo_all_blocks=1 00:05:22.376 --rc geninfo_unexecuted_blocks=1 00:05:22.376 00:05:22.376 ' 00:05:22.376 07:56:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:22.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.376 --rc genhtml_branch_coverage=1 00:05:22.376 --rc genhtml_function_coverage=1 00:05:22.376 --rc genhtml_legend=1 00:05:22.376 --rc geninfo_all_blocks=1 00:05:22.376 --rc geninfo_unexecuted_blocks=1 00:05:22.376 00:05:22.376 ' 00:05:22.376 07:56:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:22.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.376 --rc genhtml_branch_coverage=1 00:05:22.376 --rc genhtml_function_coverage=1 00:05:22.376 --rc genhtml_legend=1 00:05:22.376 --rc geninfo_all_blocks=1 00:05:22.376 --rc geninfo_unexecuted_blocks=1 00:05:22.376 00:05:22.376 ' 00:05:22.376 07:56:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:22.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.376 --rc genhtml_branch_coverage=1 00:05:22.376 --rc genhtml_function_coverage=1 00:05:22.376 --rc genhtml_legend=1 00:05:22.376 --rc geninfo_all_blocks=1 00:05:22.376 --rc geninfo_unexecuted_blocks=1 00:05:22.376 00:05:22.376 ' 00:05:22.376 07:56:33 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:22.376 07:56:33 -- setup/devices.sh@192 -- # setup reset 00:05:22.376 07:56:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.376 07:56:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.312 07:56:34 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:23.312 07:56:34 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:23.312 07:56:34 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:23.312 07:56:34 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:23.312 07:56:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:23.312 07:56:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:23.312 07:56:34 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:23.312 07:56:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.312 07:56:34 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:23.312 07:56:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:23.312 07:56:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:23.312 07:56:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:23.312 07:56:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:23.312 07:56:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:23.312 07:56:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:23.312 07:56:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:23.312 07:56:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:23.312 07:56:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:23.312 07:56:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:23.312 07:56:34 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:23.312 07:56:34 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:23.312 07:56:34 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:23.312 07:56:34 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:23.312 07:56:34 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:23.312 07:56:34 -- setup/devices.sh@196 -- # blocks=() 00:05:23.312 07:56:34 -- setup/devices.sh@196 -- # declare -a blocks 00:05:23.312 07:56:34 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:23.312 07:56:34 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:23.312 07:56:34 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:23.312 07:56:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.312 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:23.313 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:23.313 07:56:34 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:23.313 07:56:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:23.313 07:56:34 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:23.313 07:56:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:23.313 No valid GPT data, bailing 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # pt= 00:05:23.313 07:56:34 -- scripts/common.sh@394 -- # return 1 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:23.313 07:56:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:23.313 07:56:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:23.313 07:56:34 -- setup/common.sh@80 -- # echo 5368709120 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:23.313 07:56:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.313 07:56:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:23.313 07:56:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.313 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:23.313 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:23.313 07:56:34 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:23.313 07:56:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
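
A few entries back, the guess_driver test settles on uio_pci_generic: the vfio branch returns 1 because /sys/kernel/iommu_groups is empty (the (( 0 > 0 )) check fails) and unsafe no-IOMMU mode is not enabled, while the fallback is accepted because modprobe --show-depends uio_pci_generic resolves to real .ko modules. A rough condensation of that decision follows; it reproduces only the criteria visible in the trace, not the exact branch structure of setup/driver.sh.

#!/usr/bin/env bash
# Condensed sketch of the driver pick traced above (not the SPDK script itself).
shopt -s nullglob

pick_driver() {
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=""
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
        && unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    # vfio-pci needs a populated IOMMU, or the unsafe no-IOMMU escape hatch.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi

    # Otherwise fall back to uio_pci_generic if modprobe can resolve it.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi

    echo 'No valid driver found'
    return 1
}

echo "Looking for driver=$(pick_driver)"
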
00:05:23.313 07:56:34 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:23.313 07:56:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:23.313 No valid GPT data, bailing 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # pt= 00:05:23.313 07:56:34 -- scripts/common.sh@394 -- # return 1 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:23.313 07:56:34 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:23.313 07:56:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:23.313 07:56:34 -- setup/common.sh@80 -- # echo 4294967296 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:23.313 07:56:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.313 07:56:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:23.313 07:56:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.313 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:23.313 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:23.313 07:56:34 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:23.313 07:56:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:23.313 07:56:34 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:23.313 07:56:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:23.313 No valid GPT data, bailing 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # pt= 00:05:23.313 07:56:34 -- scripts/common.sh@394 -- # return 1 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:23.313 07:56:34 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:23.313 07:56:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:23.313 07:56:34 -- setup/common.sh@80 -- # echo 4294967296 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:23.313 07:56:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.313 07:56:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:23.313 07:56:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.313 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:23.313 07:56:34 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:23.313 07:56:34 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:23.313 07:56:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:23.313 07:56:34 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:23.313 07:56:34 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:23.313 No valid GPT data, bailing 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:23.313 07:56:34 -- scripts/common.sh@393 -- # pt= 00:05:23.313 07:56:34 -- scripts/common.sh@394 -- # return 1 00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:23.313 07:56:34 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:23.313 07:56:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:23.313 07:56:34 -- setup/common.sh@80 -- # echo 4294967296 
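
The devices.sh pass above walks every /sys/block/nvme* namespace, skips zoned devices, treats "No valid GPT data, bailing" from scripts/spdk-gpt.py plus an empty blkid PTTYPE as "disk is free", and keeps only devices of at least min_disk_size=3221225472 bytes, remembering each one's PCI address. A condensed sketch of that filter follows; blkid stands in for spdk-gpt.py here, and the sysfs path used for the PCI lookup is an assumption.

#!/usr/bin/env bash
# Sketch of the block-device filter traced above. The 3221225472-byte minimum
# and the nvme!(*c*) glob come from the log; blkid replaces scripts/spdk-gpt.py.
shopt -s extglob nullglob
min_disk_size=3221225472
declare -a blocks
declare -A blocks_to_pci

for block in /sys/block/nvme!(*c*); do
    dev=${block##*/}
    [[ $(<"$block/queue/zoned") == none ]] || continue      # skip zoned namespaces
    # A non-empty partition-table type means the device is already in use.
    if [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]]; then
        continue
    fi
    size=$(( $(<"$block/size") * 512 ))                      # sectors -> bytes
    (( size >= min_disk_size )) || continue
    pci=$(basename "$(readlink -f "$block/device/device")")  # assumed sysfs layout
    blocks+=("$dev")
    blocks_to_pci[$dev]=$pci
done

for dev in "${blocks[@]}"; do
    echo "usable: $dev -> ${blocks_to_pci[$dev]}"
done
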
00:05:23.313 07:56:34 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:23.313 07:56:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.313 07:56:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:23.313 07:56:34 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:23.313 07:56:34 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:23.313 07:56:34 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:23.313 07:56:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.313 07:56:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.313 07:56:34 -- common/autotest_common.sh@10 -- # set +x 00:05:23.313 ************************************ 00:05:23.313 START TEST nvme_mount 00:05:23.313 ************************************ 00:05:23.313 07:56:34 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:23.313 07:56:34 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:23.313 07:56:34 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:23.313 07:56:34 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.313 07:56:34 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.313 07:56:34 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:23.313 07:56:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:23.313 07:56:34 -- setup/common.sh@40 -- # local part_no=1 00:05:23.313 07:56:34 -- setup/common.sh@41 -- # local size=1073741824 00:05:23.313 07:56:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:23.313 07:56:34 -- setup/common.sh@44 -- # parts=() 00:05:23.313 07:56:34 -- setup/common.sh@44 -- # local parts 00:05:23.313 07:56:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:23.313 07:56:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.313 07:56:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:23.313 07:56:34 -- setup/common.sh@46 -- # (( part++ )) 00:05:23.313 07:56:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:23.313 07:56:34 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:23.313 07:56:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:23.313 07:56:34 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:24.686 Creating new GPT entries in memory. 00:05:24.686 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:24.686 other utilities. 00:05:24.686 07:56:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:24.686 07:56:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.686 07:56:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.686 07:56:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.686 07:56:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:25.620 Creating new GPT entries in memory. 00:05:25.620 The operation has completed successfully. 
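
The sgdisk bounds above follow directly from the constants in the trace: size=1073741824 divided by 4096 gives 262144, the partition starts at sector 2048, and part_end = 2048 + 262144 - 1 = 264191, hence --new=1:2048:264191. A sketch of the whole partition/format/mount sequence follows, using the paths and commands shown in the log; partprobe stands in for the sync_dev_uevents.sh helper, and a disposable disk is assumed.

#!/usr/bin/env bash
# Sketch of the nvme_mount setup traced above; this destroys data on $disk.
set -euo pipefail

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

size=1073741824
(( size /= 4096 ))                       # 262144; used as the sector count in the sgdisk call
part_start=2048
(( part_end = part_start + size - 1 ))   # 264191

sgdisk "$disk" --zap-all                 # wipe any existing GPT/MBR structures
sgdisk "$disk" --new=1:"$part_start":"$part_end"
partprobe "$disk"                        # stand-in for sync_dev_uevents.sh in the log

mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
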
00:05:25.620 07:56:36 -- setup/common.sh@57 -- # (( part++ )) 00:05:25.620 07:56:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.620 07:56:36 -- setup/common.sh@62 -- # wait 65871 00:05:25.620 07:56:36 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.620 07:56:36 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:25.620 07:56:36 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.620 07:56:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:25.620 07:56:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:25.620 07:56:36 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.620 07:56:36 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.620 07:56:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:25.620 07:56:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:25.620 07:56:36 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.620 07:56:36 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.620 07:56:36 -- setup/devices.sh@53 -- # local found=0 00:05:25.620 07:56:36 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.620 07:56:36 -- setup/devices.sh@56 -- # : 00:05:25.620 07:56:36 -- setup/devices.sh@59 -- # local pci status 00:05:25.620 07:56:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.620 07:56:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:25.620 07:56:36 -- setup/devices.sh@47 -- # setup output config 00:05:25.620 07:56:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.620 07:56:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.620 07:56:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.620 07:56:36 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:25.620 07:56:36 -- setup/devices.sh@63 -- # found=1 00:05:25.620 07:56:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.620 07:56:36 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:25.620 07:56:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.188 07:56:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.188 07:56:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.188 07:56:37 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.189 07:56:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.189 07:56:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.189 07:56:37 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:26.189 07:56:37 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.189 07:56:37 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.189 07:56:37 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.189 07:56:37 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:26.189 07:56:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.189 07:56:37 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.189 07:56:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.189 07:56:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:26.189 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.189 07:56:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.189 07:56:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.447 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:26.447 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:26.447 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:26.447 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:26.447 07:56:37 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:26.447 07:56:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:26.447 07:56:37 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.447 07:56:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:26.447 07:56:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:26.447 07:56:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.447 07:56:37 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.447 07:56:37 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:26.447 07:56:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:26.447 07:56:37 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.447 07:56:37 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.447 07:56:37 -- setup/devices.sh@53 -- # local found=0 00:05:26.447 07:56:37 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.447 07:56:37 -- setup/devices.sh@56 -- # : 00:05:26.447 07:56:37 -- setup/devices.sh@59 -- # local pci status 00:05:26.447 07:56:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.447 07:56:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:26.447 07:56:37 -- setup/devices.sh@47 -- # setup output config 00:05:26.447 07:56:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.447 07:56:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.705 07:56:37 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.705 07:56:37 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:26.705 07:56:37 -- setup/devices.sh@63 -- # found=1 00:05:26.705 07:56:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.705 07:56:37 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.705 
07:56:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.964 07:56:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.964 07:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.964 07:56:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.964 07:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.223 07:56:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.223 07:56:38 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:27.223 07:56:38 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.223 07:56:38 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.223 07:56:38 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.223 07:56:38 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.223 07:56:38 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:27.223 07:56:38 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:27.224 07:56:38 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:27.224 07:56:38 -- setup/devices.sh@50 -- # local mount_point= 00:05:27.224 07:56:38 -- setup/devices.sh@51 -- # local test_file= 00:05:27.224 07:56:38 -- setup/devices.sh@53 -- # local found=0 00:05:27.224 07:56:38 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:27.224 07:56:38 -- setup/devices.sh@59 -- # local pci status 00:05:27.224 07:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.224 07:56:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:27.224 07:56:38 -- setup/devices.sh@47 -- # setup output config 00:05:27.224 07:56:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.224 07:56:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.485 07:56:38 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.485 07:56:38 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:27.485 07:56:38 -- setup/devices.sh@63 -- # found=1 00:05:27.485 07:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.485 07:56:38 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.485 07:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.748 07:56:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.748 07:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.748 07:56:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.748 07:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.007 07:56:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.007 07:56:39 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.007 07:56:39 -- setup/devices.sh@68 -- # return 0 00:05:28.007 07:56:39 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:28.007 07:56:39 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.007 07:56:39 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.007 07:56:39 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.007 07:56:39 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.007 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:28.007 00:05:28.007 real 0m4.497s 00:05:28.007 user 0m1.031s 00:05:28.007 sys 0m1.150s 00:05:28.007 07:56:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:28.007 07:56:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.007 ************************************ 00:05:28.007 END TEST nvme_mount 00:05:28.007 ************************************ 00:05:28.007 07:56:39 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:28.007 07:56:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.007 07:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.007 07:56:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.007 ************************************ 00:05:28.007 START TEST dm_mount 00:05:28.007 ************************************ 00:05:28.007 07:56:39 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:28.007 07:56:39 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:28.007 07:56:39 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:28.007 07:56:39 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:28.007 07:56:39 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:28.007 07:56:39 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:28.007 07:56:39 -- setup/common.sh@40 -- # local part_no=2 00:05:28.007 07:56:39 -- setup/common.sh@41 -- # local size=1073741824 00:05:28.007 07:56:39 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:28.007 07:56:39 -- setup/common.sh@44 -- # parts=() 00:05:28.007 07:56:39 -- setup/common.sh@44 -- # local parts 00:05:28.007 07:56:39 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:28.007 07:56:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.007 07:56:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.007 07:56:39 -- setup/common.sh@46 -- # (( part++ )) 00:05:28.007 07:56:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.007 07:56:39 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.007 07:56:39 -- setup/common.sh@46 -- # (( part++ )) 00:05:28.007 07:56:39 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.007 07:56:39 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:28.007 07:56:39 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:28.007 07:56:39 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:28.943 Creating new GPT entries in memory. 00:05:28.943 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.943 other utilities. 00:05:28.943 07:56:40 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.943 07:56:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.943 07:56:40 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.943 07:56:40 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.943 07:56:40 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:30.319 Creating new GPT entries in memory. 00:05:30.319 The operation has completed successfully. 00:05:30.319 07:56:41 -- setup/common.sh@57 -- # (( part++ )) 00:05:30.319 07:56:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.319 07:56:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:30.319 07:56:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.319 07:56:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:31.255 The operation has completed successfully. 00:05:31.255 07:56:42 -- setup/common.sh@57 -- # (( part++ )) 00:05:31.255 07:56:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.255 07:56:42 -- setup/common.sh@62 -- # wait 66331 00:05:31.255 07:56:42 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:31.256 07:56:42 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.256 07:56:42 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.256 07:56:42 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:31.256 07:56:42 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:31.256 07:56:42 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.256 07:56:42 -- setup/devices.sh@161 -- # break 00:05:31.256 07:56:42 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.256 07:56:42 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:31.256 07:56:42 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:31.256 07:56:42 -- setup/devices.sh@166 -- # dm=dm-0 00:05:31.256 07:56:42 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:31.256 07:56:42 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:31.256 07:56:42 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.256 07:56:42 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:31.256 07:56:42 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.256 07:56:42 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.256 07:56:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:31.256 07:56:42 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.256 07:56:42 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.256 07:56:42 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:31.256 07:56:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:31.256 07:56:42 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.256 07:56:42 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.256 07:56:42 -- setup/devices.sh@53 -- # local found=0 00:05:31.256 07:56:42 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.256 07:56:42 -- setup/devices.sh@56 -- # : 00:05:31.256 07:56:42 -- setup/devices.sh@59 -- # local pci status 00:05:31.256 07:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.256 07:56:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:31.256 07:56:42 -- setup/devices.sh@47 -- # setup output config 00:05:31.256 07:56:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.256 07:56:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.256 07:56:42 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.256 07:56:42 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:31.256 07:56:42 -- setup/devices.sh@63 -- # found=1 00:05:31.256 07:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.256 07:56:42 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.256 07:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.514 07:56:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.514 07:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.773 07:56:42 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.773 07:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.773 07:56:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.773 07:56:42 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:31.773 07:56:42 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.773 07:56:42 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.773 07:56:42 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.773 07:56:42 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.773 07:56:42 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:31.773 07:56:42 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:31.773 07:56:42 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:31.773 07:56:42 -- setup/devices.sh@50 -- # local mount_point= 00:05:31.773 07:56:42 -- setup/devices.sh@51 -- # local test_file= 00:05:31.773 07:56:42 -- setup/devices.sh@53 -- # local found=0 00:05:31.773 07:56:42 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.773 07:56:42 -- setup/devices.sh@59 -- # local pci status 00:05:31.773 07:56:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.773 07:56:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:31.773 07:56:42 -- setup/devices.sh@47 -- # setup output config 00:05:31.773 07:56:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.773 07:56:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.032 07:56:43 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.032 07:56:43 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:32.032 07:56:43 -- setup/devices.sh@63 -- # found=1 00:05:32.032 07:56:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.032 07:56:43 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.032 07:56:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.291 07:56:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.291 07:56:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.291 07:56:43 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.291 07:56:43 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.549 07:56:43 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.549 07:56:43 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:32.549 07:56:43 -- setup/devices.sh@68 -- # return 0 00:05:32.549 07:56:43 -- setup/devices.sh@187 -- # cleanup_dm 00:05:32.549 07:56:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.549 07:56:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:32.549 07:56:43 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:32.549 07:56:43 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.549 07:56:43 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:32.549 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.550 07:56:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:32.550 07:56:43 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:32.550 00:05:32.550 real 0m4.541s 00:05:32.550 user 0m0.673s 00:05:32.550 sys 0m0.800s 00:05:32.550 07:56:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.550 ************************************ 00:05:32.550 END TEST dm_mount 00:05:32.550 ************************************ 00:05:32.550 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.550 07:56:43 -- setup/devices.sh@1 -- # cleanup 00:05:32.550 07:56:43 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:32.550 07:56:43 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.550 07:56:43 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.550 07:56:43 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:32.550 07:56:43 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.550 07:56:43 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.808 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.808 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:32.808 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:32.808 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:32.808 07:56:43 -- setup/devices.sh@12 -- # cleanup_dm 00:05:32.808 07:56:43 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.808 07:56:43 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:32.808 07:56:43 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.808 07:56:43 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:32.808 07:56:43 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.808 07:56:43 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:32.808 00:05:32.808 real 0m10.695s 00:05:32.808 user 0m2.452s 00:05:32.808 sys 0m2.570s 00:05:32.808 07:56:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.808 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.808 ************************************ 00:05:32.808 END TEST devices 00:05:32.808 ************************************ 00:05:32.808 00:05:32.808 real 0m22.651s 00:05:32.808 user 0m7.868s 00:05:32.808 sys 0m9.118s 00:05:32.808 07:56:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.808 07:56:44 -- common/autotest_common.sh@10 -- # set +x 00:05:32.808 ************************************ 00:05:32.808 END TEST setup.sh 00:05:32.808 ************************************ 00:05:32.808 07:56:44 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:33.067 Hugepages 00:05:33.067 node hugesize free / total 00:05:33.067 node0 1048576kB 0 / 0 00:05:33.067 node0 2048kB 2048 / 2048 00:05:33.067 00:05:33.067 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.067 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:33.325 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:33.325 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:33.325 07:56:44 -- spdk/autotest.sh@128 -- # uname -s 00:05:33.325 07:56:44 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:33.325 07:56:44 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:33.325 07:56:44 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.891 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.149 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.149 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.149 07:56:45 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:35.082 07:56:46 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:35.082 07:56:46 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:35.082 07:56:46 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.082 07:56:46 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:35.082 07:56:46 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:35.082 07:56:46 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:35.082 07:56:46 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.082 07:56:46 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.082 07:56:46 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:35.340 07:56:46 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:35.340 07:56:46 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:35.340 07:56:46 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.598 Waiting for block devices as requested 00:05:35.598 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.598 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.856 07:56:46 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:35.856 07:56:46 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:35.856 07:56:46 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:35.856 07:56:46 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:35.856 07:56:46 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:35.856 07:56:46 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:35.856 07:56:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:35.856 07:56:46 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:35.856 07:56:46 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:35.856 07:56:46 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:35.856 07:56:46 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:35.856 07:56:46 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:35.856 07:56:46 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:35.856 07:56:46 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:35.856 07:56:46 -- common/autotest_common.sh@1552 -- # continue 00:05:35.856 07:56:46 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:35.856 07:56:46 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:35.856 07:56:46 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:35.856 07:56:46 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:35.856 07:56:46 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:35.856 07:56:46 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:35.856 07:56:46 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:35.856 07:56:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:35.856 07:56:46 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:35.856 07:56:46 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:35.856 07:56:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:35.856 07:56:46 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:35.856 07:56:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:35.856 07:56:46 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:35.856 07:56:46 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:35.857 07:56:46 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:35.857 07:56:46 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:35.857 07:56:46 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:35.857 07:56:46 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:35.857 07:56:46 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:35.857 07:56:46 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:35.857 07:56:46 -- common/autotest_common.sh@1552 -- # continue 00:05:35.857 07:56:46 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:35.857 07:56:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.857 07:56:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.857 07:56:47 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:35.857 07:56:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.857 07:56:47 -- common/autotest_common.sh@10 -- # set +x 00:05:35.857 07:56:47 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.682 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.682 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:36.682 07:56:47 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:36.682 07:56:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.682 07:56:47 -- common/autotest_common.sh@10 -- # set +x 00:05:36.940 07:56:47 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:36.940 07:56:47 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:36.940 07:56:47 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:36.940 07:56:47 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:36.940 07:56:47 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:36.940 07:56:47 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:36.940 07:56:47 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:36.940 07:56:47 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:36.940 07:56:47 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.940 07:56:47 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.940 07:56:47 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:36.940 07:56:48 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:36.940 07:56:48 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:36.940 07:56:48 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:36.940 07:56:48 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:36.940 07:56:48 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:36.940 07:56:48 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.940 07:56:48 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:36.940 07:56:48 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:36.940 07:56:48 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:36.940 07:56:48 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:36.940 07:56:48 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:36.940 07:56:48 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:36.940 07:56:48 -- common/autotest_common.sh@1588 -- # return 0 00:05:36.940 07:56:48 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:36.940 07:56:48 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:36.940 07:56:48 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:36.940 07:56:48 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:36.940 07:56:48 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:36.940 07:56:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.940 07:56:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.940 07:56:48 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.940 07:56:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.940 07:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.941 07:56:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 ************************************ 00:05:36.941 START TEST env 00:05:36.941 ************************************ 00:05:36.941 07:56:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:36.941 * Looking for test storage... 
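Note on the nvme_namespace_revert and opal_revert_cleanup traces above: both follow one pattern -- enumerate the NVMe PCI addresses with gen_nvme.sh | jq, map each address to its /dev/nvmeX controller through sysfs, and gate further action on identify-controller fields (the OACS namespace-management bit, the unallocated capacity, the PCI device ID). A rough stand-alone sketch of that pattern follows; the 0x8 mask and the loop structure are assumptions for illustration, not a verbatim copy of autotest_common.sh.
for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
  ctrlr=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")   # e.g. nvme0 for 0000:00:06.0
  oacs=$(nvme id-ctrl "/dev/$ctrlr" | grep oacs | cut -d: -f2)                       # 0x12a in this run
  if (( oacs & 0x8 )); then                                                          # bit 3 of OACS = namespace management
    echo "/dev/$ctrlr supports namespace management"
  fi
done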
00:05:36.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:36.941 07:56:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.941 07:56:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.941 07:56:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:37.199 07:56:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:37.199 07:56:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:37.199 07:56:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:37.199 07:56:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:37.199 07:56:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:37.199 07:56:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:37.199 07:56:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.199 07:56:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:37.199 07:56:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:37.199 07:56:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:37.199 07:56:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:37.199 07:56:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:37.199 07:56:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:37.199 07:56:48 -- scripts/common.sh@344 -- # : 1 00:05:37.199 07:56:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:37.199 07:56:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.199 07:56:48 -- scripts/common.sh@364 -- # decimal 1 00:05:37.199 07:56:48 -- scripts/common.sh@352 -- # local d=1 00:05:37.199 07:56:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.199 07:56:48 -- scripts/common.sh@354 -- # echo 1 00:05:37.199 07:56:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:37.199 07:56:48 -- scripts/common.sh@365 -- # decimal 2 00:05:37.199 07:56:48 -- scripts/common.sh@352 -- # local d=2 00:05:37.199 07:56:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.199 07:56:48 -- scripts/common.sh@354 -- # echo 2 00:05:37.199 07:56:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:37.199 07:56:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:37.199 07:56:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:37.199 07:56:48 -- scripts/common.sh@367 -- # return 0 00:05:37.199 07:56:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.199 07:56:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:37.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.199 --rc genhtml_branch_coverage=1 00:05:37.199 --rc genhtml_function_coverage=1 00:05:37.199 --rc genhtml_legend=1 00:05:37.199 --rc geninfo_all_blocks=1 00:05:37.199 --rc geninfo_unexecuted_blocks=1 00:05:37.199 00:05:37.199 ' 00:05:37.199 07:56:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:37.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.199 --rc genhtml_branch_coverage=1 00:05:37.199 --rc genhtml_function_coverage=1 00:05:37.199 --rc genhtml_legend=1 00:05:37.199 --rc geninfo_all_blocks=1 00:05:37.199 --rc geninfo_unexecuted_blocks=1 00:05:37.199 00:05:37.199 ' 00:05:37.199 07:56:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:37.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.199 --rc genhtml_branch_coverage=1 00:05:37.199 --rc genhtml_function_coverage=1 00:05:37.199 --rc genhtml_legend=1 00:05:37.199 --rc geninfo_all_blocks=1 00:05:37.199 --rc geninfo_unexecuted_blocks=1 00:05:37.199 00:05:37.199 ' 00:05:37.199 07:56:48 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:37.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.199 --rc genhtml_branch_coverage=1 00:05:37.199 --rc genhtml_function_coverage=1 00:05:37.199 --rc genhtml_legend=1 00:05:37.199 --rc geninfo_all_blocks=1 00:05:37.199 --rc geninfo_unexecuted_blocks=1 00:05:37.199 00:05:37.199 ' 00:05:37.199 07:56:48 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.199 07:56:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.199 07:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.199 07:56:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.199 ************************************ 00:05:37.199 START TEST env_memory 00:05:37.199 ************************************ 00:05:37.199 07:56:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.199 00:05:37.199 00:05:37.199 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.199 http://cunit.sourceforge.net/ 00:05:37.199 00:05:37.199 00:05:37.199 Suite: memory 00:05:37.199 Test: alloc and free memory map ...[2024-12-07 07:56:48.311553] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.199 passed 00:05:37.199 Test: mem map translation ...[2024-12-07 07:56:48.342978] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.199 [2024-12-07 07:56:48.343248] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.199 [2024-12-07 07:56:48.343502] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.199 [2024-12-07 07:56:48.343654] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.199 passed 00:05:37.199 Test: mem map registration ...[2024-12-07 07:56:48.407920] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.199 [2024-12-07 07:56:48.408158] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.199 passed 00:05:37.459 Test: mem map adjacent registrations ...passed 00:05:37.459 00:05:37.459 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.459 suites 1 1 n/a 0 0 00:05:37.459 tests 4 4 4 0 0 00:05:37.459 asserts 152 152 152 0 n/a 00:05:37.459 00:05:37.459 Elapsed time = 0.214 seconds 00:05:37.459 00:05:37.459 real 0m0.236s 00:05:37.459 user 0m0.218s 00:05:37.459 sys 0m0.012s 00:05:37.459 07:56:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.459 07:56:48 -- common/autotest_common.sh@10 -- # set +x 00:05:37.459 ************************************ 00:05:37.459 END TEST env_memory 00:05:37.459 ************************************ 00:05:37.459 07:56:48 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.459 07:56:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.459 07:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.459 07:56:48 -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.459 ************************************ 00:05:37.459 START TEST env_vtophys 00:05:37.459 ************************************ 00:05:37.459 07:56:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.459 EAL: lib.eal log level changed from notice to debug 00:05:37.459 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 1 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 2 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 3 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 4 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 5 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 6 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 7 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 8 as core 0 on socket 0 00:05:37.459 EAL: Detected lcore 9 as core 0 on socket 0 00:05:37.459 EAL: Maximum logical cores by configuration: 128 00:05:37.459 EAL: Detected CPU lcores: 10 00:05:37.459 EAL: Detected NUMA nodes: 1 00:05:37.459 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:37.459 EAL: Detected shared linkage of DPDK 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:37.459 EAL: Registered [vdev] bus. 00:05:37.459 EAL: bus.vdev log level changed from disabled to notice 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:37.459 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.459 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:37.459 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:37.459 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Selected IOVA mode 'PA' 00:05:37.459 EAL: Probing VFIO support... 00:05:37.459 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.459 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:37.459 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.459 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.459 EAL: Setting up physically contiguous memory... 
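Note on the EAL lines above: the vtophys test comes up without VFIO (the vfio module is absent, so EAL skips it and selects IOVA mode 'PA' over the uio-bound devices). A minimal pre-flight check for that environment, assuming only the standard /proc and /sys layout already visible in the setup.sh status output earlier:
grep -E 'HugePages_(Total|Free)' /proc/meminfo                   # the harness reserved 2048 x 2048kB pages above
for dev in /sys/bus/pci/devices/0000:00:0[67].0; do
  echo "$dev -> $(basename "$(readlink -f "$dev/driver")")"      # expect uio_pci_generic (or vfio-pci when loaded)
done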
00:05:37.459 EAL: Setting maximum number of open files to 524288 00:05:37.459 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.459 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.459 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.459 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.459 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.459 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.459 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.459 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.459 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.459 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.459 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.459 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.459 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.459 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.459 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.459 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.459 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.459 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.459 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.459 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.459 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.459 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.459 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.459 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.459 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.459 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.459 EAL: Hugepages will be freed exactly as allocated. 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: TSC frequency is ~2200000 KHz 00:05:37.459 EAL: Main lcore 0 is ready (tid=7fbc1e614a00;cpuset=[0]) 00:05:37.459 EAL: Trying to obtain current memory policy. 00:05:37.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.459 EAL: Restoring previous memory policy: 0 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.459 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.459 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.459 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:37.459 00:05:37.459 00:05:37.459 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.459 http://cunit.sourceforge.net/ 00:05:37.459 00:05:37.459 00:05:37.459 Suite: components_suite 00:05:37.459 Test: vtophys_malloc_test ...passed 00:05:37.459 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
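Note: the 0x400000000-byte virtual-area reservations above follow directly from the memseg-list geometry EAL reported (4 lists of 8192 segments, 2 MiB hugepages), e.g.:
printf '0x%x\n' $(( 8192 * 2097152 ))    # 8192 segs x 2 MiB = 0x400000000, i.e. 16 GiB of VA per list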
00:05:37.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.459 EAL: Restoring previous memory policy: 4 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.459 EAL: Trying to obtain current memory policy. 00:05:37.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.459 EAL: Restoring previous memory policy: 4 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.459 EAL: Trying to obtain current memory policy. 00:05:37.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.459 EAL: Restoring previous memory policy: 4 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.459 EAL: Trying to obtain current memory policy. 00:05:37.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.459 EAL: Restoring previous memory policy: 4 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.459 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.459 EAL: request: mp_malloc_sync 00:05:37.459 EAL: No shared files mode enabled, IPC is disabled 00:05:37.459 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.459 EAL: Trying to obtain current memory policy. 00:05:37.459 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.719 EAL: Restoring previous memory policy: 4 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.719 EAL: request: mp_malloc_sync 00:05:37.719 EAL: No shared files mode enabled, IPC is disabled 00:05:37.719 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.719 EAL: request: mp_malloc_sync 00:05:37.719 EAL: No shared files mode enabled, IPC is disabled 00:05:37.719 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.719 EAL: Trying to obtain current memory policy. 
00:05:37.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.719 EAL: Restoring previous memory policy: 4 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.719 EAL: request: mp_malloc_sync 00:05:37.719 EAL: No shared files mode enabled, IPC is disabled 00:05:37.719 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.719 EAL: request: mp_malloc_sync 00:05:37.719 EAL: No shared files mode enabled, IPC is disabled 00:05:37.719 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.719 EAL: Trying to obtain current memory policy. 00:05:37.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.719 EAL: Restoring previous memory policy: 4 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.719 EAL: request: mp_malloc_sync 00:05:37.719 EAL: No shared files mode enabled, IPC is disabled 00:05:37.719 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.719 EAL: request: mp_malloc_sync 00:05:37.719 EAL: No shared files mode enabled, IPC is disabled 00:05:37.719 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.719 EAL: Trying to obtain current memory policy. 00:05:37.719 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.719 EAL: Restoring previous memory policy: 4 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.719 EAL: request: mp_malloc_sync 00:05:37.719 EAL: No shared files mode enabled, IPC is disabled 00:05:37.719 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.719 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.977 EAL: request: mp_malloc_sync 00:05:37.978 EAL: No shared files mode enabled, IPC is disabled 00:05:37.978 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.978 EAL: Trying to obtain current memory policy. 00:05:37.978 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.978 EAL: Restoring previous memory policy: 4 00:05:37.978 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.978 EAL: request: mp_malloc_sync 00:05:37.978 EAL: No shared files mode enabled, IPC is disabled 00:05:37.978 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.978 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.236 EAL: request: mp_malloc_sync 00:05:38.236 EAL: No shared files mode enabled, IPC is disabled 00:05:38.236 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.236 EAL: Trying to obtain current memory policy. 
00:05:38.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.494 EAL: Restoring previous memory policy: 4 00:05:38.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.494 EAL: request: mp_malloc_sync 00:05:38.494 EAL: No shared files mode enabled, IPC is disabled 00:05:38.494 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.752 passedEAL: request: mp_malloc_sync 00:05:38.752 EAL: No shared files mode enabled, IPC is disabled 00:05:38.752 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.752 00:05:38.752 00:05:38.752 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.752 suites 1 1 n/a 0 0 00:05:38.752 tests 2 2 2 0 0 00:05:38.752 asserts 5204 5204 5204 0 n/a 00:05:38.752 00:05:38.752 Elapsed time = 1.238 seconds 00:05:38.752 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.752 EAL: request: mp_malloc_sync 00:05:38.752 EAL: No shared files mode enabled, IPC is disabled 00:05:38.752 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.752 EAL: No shared files mode enabled, IPC is disabled 00:05:38.752 EAL: No shared files mode enabled, IPC is disabled 00:05:38.752 EAL: No shared files mode enabled, IPC is disabled 00:05:38.752 00:05:38.752 real 0m1.445s 00:05:38.752 user 0m0.793s 00:05:38.752 sys 0m0.513s 00:05:38.752 07:56:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.752 ************************************ 00:05:38.752 END TEST env_vtophys 00:05:38.752 ************************************ 00:05:38.752 07:56:49 -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 07:56:50 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.011 07:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.011 07:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.011 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 ************************************ 00:05:39.011 START TEST env_pci 00:05:39.011 ************************************ 00:05:39.011 07:56:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.011 00:05:39.011 00:05:39.011 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.011 http://cunit.sourceforge.net/ 00:05:39.011 00:05:39.011 00:05:39.011 Suite: pci 00:05:39.011 Test: pci_hook ...[2024-12-07 07:56:50.067280] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67470 has claimed it 00:05:39.011 passed 00:05:39.011 00:05:39.011 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.011 suites 1 1 n/a 0 0 00:05:39.011 tests 1 1 1 0 0 00:05:39.011 asserts 25 25 25 0 n/a 00:05:39.011 00:05:39.011 Elapsed time = 0.002 seconds 00:05:39.011 EAL: Cannot find device (10000:00:01.0) 00:05:39.011 EAL: Failed to attach device on primary process 00:05:39.011 00:05:39.011 real 0m0.021s 00:05:39.011 user 0m0.011s 00:05:39.011 sys 0m0.008s 00:05:39.011 ************************************ 00:05:39.011 END TEST env_pci 00:05:39.011 ************************************ 00:05:39.011 07:56:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.011 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 07:56:50 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.011 07:56:50 -- env/env.sh@15 -- # uname 00:05:39.011 07:56:50 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.011 07:56:50 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:39.011 07:56:50 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.011 07:56:50 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:39.011 07:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.011 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.011 ************************************ 00:05:39.011 START TEST env_dpdk_post_init 00:05:39.011 ************************************ 00:05:39.011 07:56:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.011 EAL: Detected CPU lcores: 10 00:05:39.011 EAL: Detected NUMA nodes: 1 00:05:39.011 EAL: Detected shared linkage of DPDK 00:05:39.011 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.011 EAL: Selected IOVA mode 'PA' 00:05:39.011 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.269 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:39.269 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:39.269 Starting DPDK initialization... 00:05:39.269 Starting SPDK post initialization... 00:05:39.269 SPDK NVMe probe 00:05:39.269 Attaching to 0000:00:06.0 00:05:39.269 Attaching to 0000:00:07.0 00:05:39.269 Attached to 0000:00:06.0 00:05:39.269 Attached to 0000:00:07.0 00:05:39.269 Cleaning up... 00:05:39.269 ************************************ 00:05:39.269 END TEST env_dpdk_post_init 00:05:39.269 ************************************ 00:05:39.269 00:05:39.269 real 0m0.177s 00:05:39.269 user 0m0.038s 00:05:39.269 sys 0m0.037s 00:05:39.269 07:56:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.269 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.269 07:56:50 -- env/env.sh@26 -- # uname 00:05:39.269 07:56:50 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.269 07:56:50 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.269 07:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.269 07:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.269 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.269 ************************************ 00:05:39.269 START TEST env_mem_callbacks 00:05:39.269 ************************************ 00:05:39.269 07:56:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.269 EAL: Detected CPU lcores: 10 00:05:39.269 EAL: Detected NUMA nodes: 1 00:05:39.269 EAL: Detected shared linkage of DPDK 00:05:39.269 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.269 EAL: Selected IOVA mode 'PA' 00:05:39.269 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.269 00:05:39.269 00:05:39.269 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.269 http://cunit.sourceforge.net/ 00:05:39.269 00:05:39.269 00:05:39.269 Suite: memory 00:05:39.269 Test: test ... 
00:05:39.269 register 0x200000200000 2097152 00:05:39.269 malloc 3145728 00:05:39.269 register 0x200000400000 4194304 00:05:39.269 buf 0x200000500000 len 3145728 PASSED 00:05:39.269 malloc 64 00:05:39.269 buf 0x2000004fff40 len 64 PASSED 00:05:39.269 malloc 4194304 00:05:39.269 register 0x200000800000 6291456 00:05:39.269 buf 0x200000a00000 len 4194304 PASSED 00:05:39.269 free 0x200000500000 3145728 00:05:39.269 free 0x2000004fff40 64 00:05:39.269 unregister 0x200000400000 4194304 PASSED 00:05:39.269 free 0x200000a00000 4194304 00:05:39.269 unregister 0x200000800000 6291456 PASSED 00:05:39.269 malloc 8388608 00:05:39.269 register 0x200000400000 10485760 00:05:39.269 buf 0x200000600000 len 8388608 PASSED 00:05:39.269 free 0x200000600000 8388608 00:05:39.269 unregister 0x200000400000 10485760 PASSED 00:05:39.269 passed 00:05:39.269 00:05:39.269 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.269 suites 1 1 n/a 0 0 00:05:39.269 tests 1 1 1 0 0 00:05:39.269 asserts 15 15 15 0 n/a 00:05:39.269 00:05:39.269 Elapsed time = 0.009 seconds 00:05:39.270 ************************************ 00:05:39.270 END TEST env_mem_callbacks 00:05:39.270 ************************************ 00:05:39.270 00:05:39.270 real 0m0.149s 00:05:39.270 user 0m0.018s 00:05:39.270 sys 0m0.028s 00:05:39.270 07:56:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.270 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.528 ************************************ 00:05:39.528 END TEST env 00:05:39.528 ************************************ 00:05:39.528 00:05:39.528 real 0m2.492s 00:05:39.528 user 0m1.277s 00:05:39.528 sys 0m0.844s 00:05:39.528 07:56:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.528 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.528 07:56:50 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.528 07:56:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.528 07:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.528 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.528 ************************************ 00:05:39.528 START TEST rpc 00:05:39.528 ************************************ 00:05:39.528 07:56:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.528 * Looking for test storage... 
00:05:39.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.528 07:56:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.528 07:56:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.528 07:56:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.528 07:56:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.528 07:56:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.528 07:56:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.528 07:56:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.528 07:56:50 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.528 07:56:50 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.528 07:56:50 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.528 07:56:50 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.528 07:56:50 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.528 07:56:50 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.528 07:56:50 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.528 07:56:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.528 07:56:50 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.528 07:56:50 -- scripts/common.sh@344 -- # : 1 00:05:39.528 07:56:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.528 07:56:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.528 07:56:50 -- scripts/common.sh@364 -- # decimal 1 00:05:39.528 07:56:50 -- scripts/common.sh@352 -- # local d=1 00:05:39.528 07:56:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.528 07:56:50 -- scripts/common.sh@354 -- # echo 1 00:05:39.528 07:56:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.528 07:56:50 -- scripts/common.sh@365 -- # decimal 2 00:05:39.528 07:56:50 -- scripts/common.sh@352 -- # local d=2 00:05:39.528 07:56:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.528 07:56:50 -- scripts/common.sh@354 -- # echo 2 00:05:39.528 07:56:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.528 07:56:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.528 07:56:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.528 07:56:50 -- scripts/common.sh@367 -- # return 0 00:05:39.528 07:56:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.528 07:56:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.528 --rc genhtml_branch_coverage=1 00:05:39.528 --rc genhtml_function_coverage=1 00:05:39.528 --rc genhtml_legend=1 00:05:39.528 --rc geninfo_all_blocks=1 00:05:39.528 --rc geninfo_unexecuted_blocks=1 00:05:39.528 00:05:39.528 ' 00:05:39.528 07:56:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.528 --rc genhtml_branch_coverage=1 00:05:39.528 --rc genhtml_function_coverage=1 00:05:39.528 --rc genhtml_legend=1 00:05:39.528 --rc geninfo_all_blocks=1 00:05:39.528 --rc geninfo_unexecuted_blocks=1 00:05:39.528 00:05:39.528 ' 00:05:39.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
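Note on the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above: it comes from the waitforlisten helper, which conceptually reduces to polling the JSON-RPC socket until the newly launched target answers. The loop below is a simplified assumption rather than the helper's exact code; rpc.py and the socket path are taken from this trace.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1      # keep retrying until spdk_tgt (pid 67587 in this run) serves RPCs
done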
00:05:39.528 07:56:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.528 --rc genhtml_branch_coverage=1 00:05:39.528 --rc genhtml_function_coverage=1 00:05:39.528 --rc genhtml_legend=1 00:05:39.528 --rc geninfo_all_blocks=1 00:05:39.528 --rc geninfo_unexecuted_blocks=1 00:05:39.528 00:05:39.528 ' 00:05:39.528 07:56:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.528 --rc genhtml_branch_coverage=1 00:05:39.528 --rc genhtml_function_coverage=1 00:05:39.528 --rc genhtml_legend=1 00:05:39.528 --rc geninfo_all_blocks=1 00:05:39.528 --rc geninfo_unexecuted_blocks=1 00:05:39.528 00:05:39.528 ' 00:05:39.528 07:56:50 -- rpc/rpc.sh@65 -- # spdk_pid=67587 00:05:39.528 07:56:50 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.528 07:56:50 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.528 07:56:50 -- rpc/rpc.sh@67 -- # waitforlisten 67587 00:05:39.528 07:56:50 -- common/autotest_common.sh@829 -- # '[' -z 67587 ']' 00:05:39.528 07:56:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.528 07:56:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.528 07:56:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.528 07:56:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.528 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.787 [2024-12-07 07:56:50.854864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.787 [2024-12-07 07:56:50.855251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67587 ] 00:05:39.787 [2024-12-07 07:56:50.999157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.045 [2024-12-07 07:56:51.082320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.045 [2024-12-07 07:56:51.082742] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.045 [2024-12-07 07:56:51.082798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67587' to capture a snapshot of events at runtime. 00:05:40.045 [2024-12-07 07:56:51.083008] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67587 for offline analysis/debug. 
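Note: the rpc_integrity test that begins below drives the freshly started target purely over JSON-RPC. Condensed into plain rpc.py calls (the RPC names and arguments match the trace; invoking rpc.py directly instead of the test's rpc_cmd wrapper is an assumption):
rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)
"${rpc[@]}" bdev_malloc_create 8 512                      # Malloc0: 16384 x 512-byte blocks (8 MiB)
"${rpc[@]}" bdev_passthru_create -b Malloc0 -p Passthru0  # claim Malloc0 behind Passthru0
"${rpc[@]}" bdev_get_bdevs | jq length                    # 2 while both bdevs exist
"${rpc[@]}" bdev_passthru_delete Passthru0
"${rpc[@]}" bdev_malloc_delete Malloc0
"${rpc[@]}" bdev_get_bdevs | jq length                    # 0 after cleanup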
00:05:40.045 [2024-12-07 07:56:51.083169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.610 07:56:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.610 07:56:51 -- common/autotest_common.sh@862 -- # return 0 00:05:40.610 07:56:51 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.610 07:56:51 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:40.610 07:56:51 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:40.610 07:56:51 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:40.610 07:56:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.610 07:56:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.610 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.610 ************************************ 00:05:40.610 START TEST rpc_integrity 00:05:40.610 ************************************ 00:05:40.610 07:56:51 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:40.867 07:56:51 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:40.867 07:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.867 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.867 07:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.867 07:56:51 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:40.867 07:56:51 -- rpc/rpc.sh@13 -- # jq length 00:05:40.867 07:56:51 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:40.867 07:56:51 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:40.867 07:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.867 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.867 07:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.867 07:56:51 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:40.867 07:56:51 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:40.867 07:56:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.867 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:05:40.867 07:56:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.867 07:56:51 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:40.867 { 00:05:40.867 "aliases": [ 00:05:40.867 "d8831fa0-cf20-40ee-a7dc-8782c4788981" 00:05:40.867 ], 00:05:40.867 "assigned_rate_limits": { 00:05:40.867 "r_mbytes_per_sec": 0, 00:05:40.867 "rw_ios_per_sec": 0, 00:05:40.867 "rw_mbytes_per_sec": 0, 00:05:40.867 "w_mbytes_per_sec": 0 00:05:40.867 }, 00:05:40.867 "block_size": 512, 00:05:40.867 "claimed": false, 00:05:40.867 "driver_specific": {}, 00:05:40.867 "memory_domains": [ 00:05:40.868 { 00:05:40.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.868 "dma_device_type": 2 00:05:40.868 } 00:05:40.868 ], 00:05:40.868 "name": "Malloc0", 00:05:40.868 "num_blocks": 16384, 00:05:40.868 "product_name": "Malloc disk", 00:05:40.868 "supported_io_types": { 00:05:40.868 "abort": true, 00:05:40.868 "compare": false, 00:05:40.868 "compare_and_write": false, 00:05:40.868 "flush": true, 00:05:40.868 "nvme_admin": false, 00:05:40.868 "nvme_io": false, 00:05:40.868 "read": true, 00:05:40.868 "reset": true, 00:05:40.868 "unmap": true, 00:05:40.868 "write": true, 00:05:40.868 "write_zeroes": true 00:05:40.868 }, 
00:05:40.868 "uuid": "d8831fa0-cf20-40ee-a7dc-8782c4788981", 00:05:40.868 "zoned": false 00:05:40.868 } 00:05:40.868 ]' 00:05:40.868 07:56:51 -- rpc/rpc.sh@17 -- # jq length 00:05:40.868 07:56:52 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:40.868 07:56:52 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:40.868 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.868 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:40.868 [2024-12-07 07:56:52.039456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:40.868 [2024-12-07 07:56:52.039513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.868 [2024-12-07 07:56:52.039533] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfffb60 00:05:40.868 [2024-12-07 07:56:52.039557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.868 [2024-12-07 07:56:52.041239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.868 [2024-12-07 07:56:52.041274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:40.868 Passthru0 00:05:40.868 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.868 07:56:52 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:40.868 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.868 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:40.868 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.868 07:56:52 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:40.868 { 00:05:40.868 "aliases": [ 00:05:40.868 "d8831fa0-cf20-40ee-a7dc-8782c4788981" 00:05:40.868 ], 00:05:40.868 "assigned_rate_limits": { 00:05:40.868 "r_mbytes_per_sec": 0, 00:05:40.868 "rw_ios_per_sec": 0, 00:05:40.868 "rw_mbytes_per_sec": 0, 00:05:40.868 "w_mbytes_per_sec": 0 00:05:40.868 }, 00:05:40.868 "block_size": 512, 00:05:40.868 "claim_type": "exclusive_write", 00:05:40.868 "claimed": true, 00:05:40.868 "driver_specific": {}, 00:05:40.868 "memory_domains": [ 00:05:40.868 { 00:05:40.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.868 "dma_device_type": 2 00:05:40.868 } 00:05:40.868 ], 00:05:40.868 "name": "Malloc0", 00:05:40.868 "num_blocks": 16384, 00:05:40.868 "product_name": "Malloc disk", 00:05:40.868 "supported_io_types": { 00:05:40.868 "abort": true, 00:05:40.868 "compare": false, 00:05:40.868 "compare_and_write": false, 00:05:40.868 "flush": true, 00:05:40.868 "nvme_admin": false, 00:05:40.868 "nvme_io": false, 00:05:40.868 "read": true, 00:05:40.868 "reset": true, 00:05:40.868 "unmap": true, 00:05:40.868 "write": true, 00:05:40.868 "write_zeroes": true 00:05:40.868 }, 00:05:40.868 "uuid": "d8831fa0-cf20-40ee-a7dc-8782c4788981", 00:05:40.868 "zoned": false 00:05:40.868 }, 00:05:40.868 { 00:05:40.868 "aliases": [ 00:05:40.868 "02f66572-69aa-528a-ac97-f40d933dd1d1" 00:05:40.868 ], 00:05:40.868 "assigned_rate_limits": { 00:05:40.868 "r_mbytes_per_sec": 0, 00:05:40.868 "rw_ios_per_sec": 0, 00:05:40.868 "rw_mbytes_per_sec": 0, 00:05:40.868 "w_mbytes_per_sec": 0 00:05:40.868 }, 00:05:40.868 "block_size": 512, 00:05:40.868 "claimed": false, 00:05:40.868 "driver_specific": { 00:05:40.868 "passthru": { 00:05:40.868 "base_bdev_name": "Malloc0", 00:05:40.868 "name": "Passthru0" 00:05:40.868 } 00:05:40.868 }, 00:05:40.868 "memory_domains": [ 00:05:40.868 { 00:05:40.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:40.868 "dma_device_type": 2 00:05:40.868 } 00:05:40.868 ], 
00:05:40.868 "name": "Passthru0", 00:05:40.868 "num_blocks": 16384, 00:05:40.868 "product_name": "passthru", 00:05:40.868 "supported_io_types": { 00:05:40.868 "abort": true, 00:05:40.868 "compare": false, 00:05:40.868 "compare_and_write": false, 00:05:40.868 "flush": true, 00:05:40.868 "nvme_admin": false, 00:05:40.868 "nvme_io": false, 00:05:40.868 "read": true, 00:05:40.868 "reset": true, 00:05:40.868 "unmap": true, 00:05:40.868 "write": true, 00:05:40.868 "write_zeroes": true 00:05:40.868 }, 00:05:40.868 "uuid": "02f66572-69aa-528a-ac97-f40d933dd1d1", 00:05:40.868 "zoned": false 00:05:40.868 } 00:05:40.868 ]' 00:05:40.868 07:56:52 -- rpc/rpc.sh@21 -- # jq length 00:05:40.868 07:56:52 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:40.868 07:56:52 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:40.868 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.868 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:40.868 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.868 07:56:52 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:40.868 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.868 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.125 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.125 07:56:52 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.125 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.125 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.125 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.125 07:56:52 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.125 07:56:52 -- rpc/rpc.sh@26 -- # jq length 00:05:41.125 ************************************ 00:05:41.125 END TEST rpc_integrity 00:05:41.125 ************************************ 00:05:41.125 07:56:52 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.125 00:05:41.125 real 0m0.322s 00:05:41.125 user 0m0.205s 00:05:41.125 sys 0m0.038s 00:05:41.125 07:56:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.125 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.125 07:56:52 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.126 07:56:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.126 07:56:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.126 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 ************************************ 00:05:41.126 START TEST rpc_plugins 00:05:41.126 ************************************ 00:05:41.126 07:56:52 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:41.126 07:56:52 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.126 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 07:56:52 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.126 07:56:52 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.126 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 07:56:52 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.126 { 00:05:41.126 "aliases": [ 00:05:41.126 "6977924c-1c43-4b03-b88a-58872c523797" 00:05:41.126 ], 00:05:41.126 "assigned_rate_limits": { 00:05:41.126 "r_mbytes_per_sec": 0, 00:05:41.126 
"rw_ios_per_sec": 0, 00:05:41.126 "rw_mbytes_per_sec": 0, 00:05:41.126 "w_mbytes_per_sec": 0 00:05:41.126 }, 00:05:41.126 "block_size": 4096, 00:05:41.126 "claimed": false, 00:05:41.126 "driver_specific": {}, 00:05:41.126 "memory_domains": [ 00:05:41.126 { 00:05:41.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.126 "dma_device_type": 2 00:05:41.126 } 00:05:41.126 ], 00:05:41.126 "name": "Malloc1", 00:05:41.126 "num_blocks": 256, 00:05:41.126 "product_name": "Malloc disk", 00:05:41.126 "supported_io_types": { 00:05:41.126 "abort": true, 00:05:41.126 "compare": false, 00:05:41.126 "compare_and_write": false, 00:05:41.126 "flush": true, 00:05:41.126 "nvme_admin": false, 00:05:41.126 "nvme_io": false, 00:05:41.126 "read": true, 00:05:41.126 "reset": true, 00:05:41.126 "unmap": true, 00:05:41.126 "write": true, 00:05:41.126 "write_zeroes": true 00:05:41.126 }, 00:05:41.126 "uuid": "6977924c-1c43-4b03-b88a-58872c523797", 00:05:41.126 "zoned": false 00:05:41.126 } 00:05:41.126 ]' 00:05:41.126 07:56:52 -- rpc/rpc.sh@32 -- # jq length 00:05:41.126 07:56:52 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.126 07:56:52 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.126 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 07:56:52 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.126 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.126 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.126 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.126 07:56:52 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.126 07:56:52 -- rpc/rpc.sh@36 -- # jq length 00:05:41.384 ************************************ 00:05:41.384 END TEST rpc_plugins 00:05:41.384 ************************************ 00:05:41.384 07:56:52 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.384 00:05:41.384 real 0m0.167s 00:05:41.384 user 0m0.113s 00:05:41.384 sys 0m0.015s 00:05:41.384 07:56:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.384 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.384 07:56:52 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:41.384 07:56:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.384 07:56:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.384 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.384 ************************************ 00:05:41.384 START TEST rpc_trace_cmd_test 00:05:41.384 ************************************ 00:05:41.384 07:56:52 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:41.384 07:56:52 -- rpc/rpc.sh@40 -- # local info 00:05:41.384 07:56:52 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.384 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.384 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.384 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.384 07:56:52 -- rpc/rpc.sh@42 -- # info='{ 00:05:41.384 "bdev": { 00:05:41.384 "mask": "0x8", 00:05:41.384 "tpoint_mask": "0xffffffffffffffff" 00:05:41.384 }, 00:05:41.384 "bdev_nvme": { 00:05:41.384 "mask": "0x4000", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "blobfs": { 00:05:41.384 "mask": "0x80", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "dsa": { 00:05:41.384 "mask": "0x200", 00:05:41.384 
"tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "ftl": { 00:05:41.384 "mask": "0x40", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "iaa": { 00:05:41.384 "mask": "0x1000", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "iscsi_conn": { 00:05:41.384 "mask": "0x2", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "nvme_pcie": { 00:05:41.384 "mask": "0x800", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "nvme_tcp": { 00:05:41.384 "mask": "0x2000", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "nvmf_rdma": { 00:05:41.384 "mask": "0x10", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "nvmf_tcp": { 00:05:41.384 "mask": "0x20", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "scsi": { 00:05:41.384 "mask": "0x4", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "thread": { 00:05:41.384 "mask": "0x400", 00:05:41.384 "tpoint_mask": "0x0" 00:05:41.384 }, 00:05:41.384 "tpoint_group_mask": "0x8", 00:05:41.384 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67587" 00:05:41.384 }' 00:05:41.384 07:56:52 -- rpc/rpc.sh@43 -- # jq length 00:05:41.384 07:56:52 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:41.384 07:56:52 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.384 07:56:52 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.384 07:56:52 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.384 07:56:52 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.384 07:56:52 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.643 07:56:52 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.643 07:56:52 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:41.643 ************************************ 00:05:41.643 END TEST rpc_trace_cmd_test 00:05:41.643 ************************************ 00:05:41.643 07:56:52 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:41.643 00:05:41.643 real 0m0.282s 00:05:41.643 user 0m0.250s 00:05:41.643 sys 0m0.023s 00:05:41.643 07:56:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.643 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.643 07:56:52 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:41.643 07:56:52 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:41.643 07:56:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.643 07:56:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.643 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.643 ************************************ 00:05:41.643 START TEST go_rpc 00:05:41.643 ************************************ 00:05:41.643 07:56:52 -- common/autotest_common.sh@1114 -- # go_rpc 00:05:41.643 07:56:52 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:41.643 07:56:52 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:41.643 07:56:52 -- rpc/rpc.sh@52 -- # jq length 00:05:41.643 07:56:52 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:41.643 07:56:52 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.643 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.643 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.643 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.643 07:56:52 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:41.643 07:56:52 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:41.643 07:56:52 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["29322701-286b-4069-aca9-f5776098219e"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"29322701-286b-4069-aca9-f5776098219e","zoned":false}]' 00:05:41.643 07:56:52 -- rpc/rpc.sh@57 -- # jq length 00:05:41.902 07:56:52 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:41.902 07:56:52 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:41.902 07:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.902 07:56:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.902 07:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.902 07:56:52 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:41.902 07:56:52 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:41.902 07:56:52 -- rpc/rpc.sh@61 -- # jq length 00:05:41.902 07:56:53 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:41.902 00:05:41.902 real 0m0.239s 00:05:41.902 user 0m0.162s 00:05:41.902 sys 0m0.036s 00:05:41.902 07:56:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.902 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:41.902 ************************************ 00:05:41.902 END TEST go_rpc 00:05:41.902 ************************************ 00:05:41.902 07:56:53 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:41.902 07:56:53 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:41.902 07:56:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.902 07:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.902 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:41.902 ************************************ 00:05:41.902 START TEST rpc_daemon_integrity 00:05:41.902 ************************************ 00:05:41.902 07:56:53 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:41.902 07:56:53 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.902 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.902 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:41.902 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:41.902 07:56:53 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.902 07:56:53 -- rpc/rpc.sh@13 -- # jq length 00:05:41.902 07:56:53 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.902 07:56:53 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.902 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:41.902 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.193 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.193 07:56:53 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:42.193 07:56:53 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.193 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.193 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.193 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.193 07:56:53 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.193 { 00:05:42.193 "aliases": [ 00:05:42.193 "5c3678d8-1059-4d31-b840-54179ed1f939" 00:05:42.193 ], 00:05:42.193 "assigned_rate_limits": { 00:05:42.193 
"r_mbytes_per_sec": 0, 00:05:42.193 "rw_ios_per_sec": 0, 00:05:42.193 "rw_mbytes_per_sec": 0, 00:05:42.193 "w_mbytes_per_sec": 0 00:05:42.193 }, 00:05:42.193 "block_size": 512, 00:05:42.193 "claimed": false, 00:05:42.193 "driver_specific": {}, 00:05:42.194 "memory_domains": [ 00:05:42.194 { 00:05:42.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.194 "dma_device_type": 2 00:05:42.194 } 00:05:42.194 ], 00:05:42.194 "name": "Malloc3", 00:05:42.194 "num_blocks": 16384, 00:05:42.194 "product_name": "Malloc disk", 00:05:42.194 "supported_io_types": { 00:05:42.194 "abort": true, 00:05:42.194 "compare": false, 00:05:42.194 "compare_and_write": false, 00:05:42.194 "flush": true, 00:05:42.194 "nvme_admin": false, 00:05:42.194 "nvme_io": false, 00:05:42.194 "read": true, 00:05:42.194 "reset": true, 00:05:42.194 "unmap": true, 00:05:42.194 "write": true, 00:05:42.194 "write_zeroes": true 00:05:42.194 }, 00:05:42.194 "uuid": "5c3678d8-1059-4d31-b840-54179ed1f939", 00:05:42.194 "zoned": false 00:05:42.194 } 00:05:42.194 ]' 00:05:42.194 07:56:53 -- rpc/rpc.sh@17 -- # jq length 00:05:42.194 07:56:53 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.194 07:56:53 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:42.194 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.194 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.194 [2024-12-07 07:56:53.252585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:42.194 [2024-12-07 07:56:53.252702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.194 [2024-12-07 07:56:53.252722] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1001990 00:05:42.194 [2024-12-07 07:56:53.252731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.194 [2024-12-07 07:56:53.254350] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.194 [2024-12-07 07:56:53.254385] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.194 Passthru0 00:05:42.194 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.194 07:56:53 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.194 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.194 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.194 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.194 07:56:53 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.194 { 00:05:42.194 "aliases": [ 00:05:42.194 "5c3678d8-1059-4d31-b840-54179ed1f939" 00:05:42.194 ], 00:05:42.194 "assigned_rate_limits": { 00:05:42.194 "r_mbytes_per_sec": 0, 00:05:42.194 "rw_ios_per_sec": 0, 00:05:42.194 "rw_mbytes_per_sec": 0, 00:05:42.194 "w_mbytes_per_sec": 0 00:05:42.194 }, 00:05:42.194 "block_size": 512, 00:05:42.194 "claim_type": "exclusive_write", 00:05:42.194 "claimed": true, 00:05:42.194 "driver_specific": {}, 00:05:42.194 "memory_domains": [ 00:05:42.194 { 00:05:42.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.194 "dma_device_type": 2 00:05:42.194 } 00:05:42.194 ], 00:05:42.194 "name": "Malloc3", 00:05:42.194 "num_blocks": 16384, 00:05:42.194 "product_name": "Malloc disk", 00:05:42.194 "supported_io_types": { 00:05:42.194 "abort": true, 00:05:42.194 "compare": false, 00:05:42.194 "compare_and_write": false, 00:05:42.194 "flush": true, 00:05:42.194 "nvme_admin": false, 00:05:42.194 "nvme_io": false, 00:05:42.194 "read": true, 00:05:42.194 "reset": true, 
00:05:42.194 "unmap": true, 00:05:42.194 "write": true, 00:05:42.194 "write_zeroes": true 00:05:42.194 }, 00:05:42.194 "uuid": "5c3678d8-1059-4d31-b840-54179ed1f939", 00:05:42.194 "zoned": false 00:05:42.194 }, 00:05:42.194 { 00:05:42.194 "aliases": [ 00:05:42.194 "fac0debf-ddba-5115-8c5a-0b47d2373471" 00:05:42.194 ], 00:05:42.194 "assigned_rate_limits": { 00:05:42.194 "r_mbytes_per_sec": 0, 00:05:42.194 "rw_ios_per_sec": 0, 00:05:42.194 "rw_mbytes_per_sec": 0, 00:05:42.194 "w_mbytes_per_sec": 0 00:05:42.194 }, 00:05:42.194 "block_size": 512, 00:05:42.194 "claimed": false, 00:05:42.194 "driver_specific": { 00:05:42.194 "passthru": { 00:05:42.194 "base_bdev_name": "Malloc3", 00:05:42.194 "name": "Passthru0" 00:05:42.194 } 00:05:42.194 }, 00:05:42.194 "memory_domains": [ 00:05:42.194 { 00:05:42.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.194 "dma_device_type": 2 00:05:42.194 } 00:05:42.194 ], 00:05:42.194 "name": "Passthru0", 00:05:42.194 "num_blocks": 16384, 00:05:42.194 "product_name": "passthru", 00:05:42.194 "supported_io_types": { 00:05:42.194 "abort": true, 00:05:42.194 "compare": false, 00:05:42.194 "compare_and_write": false, 00:05:42.194 "flush": true, 00:05:42.194 "nvme_admin": false, 00:05:42.194 "nvme_io": false, 00:05:42.194 "read": true, 00:05:42.194 "reset": true, 00:05:42.194 "unmap": true, 00:05:42.194 "write": true, 00:05:42.194 "write_zeroes": true 00:05:42.194 }, 00:05:42.194 "uuid": "fac0debf-ddba-5115-8c5a-0b47d2373471", 00:05:42.194 "zoned": false 00:05:42.194 } 00:05:42.194 ]' 00:05:42.194 07:56:53 -- rpc/rpc.sh@21 -- # jq length 00:05:42.194 07:56:53 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.194 07:56:53 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.194 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.194 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.194 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.194 07:56:53 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:42.194 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.194 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.194 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.194 07:56:53 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.194 07:56:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.194 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.194 07:56:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.194 07:56:53 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.194 07:56:53 -- rpc/rpc.sh@26 -- # jq length 00:05:42.194 ************************************ 00:05:42.194 END TEST rpc_daemon_integrity 00:05:42.194 ************************************ 00:05:42.194 07:56:53 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.194 00:05:42.194 real 0m0.321s 00:05:42.194 user 0m0.213s 00:05:42.194 sys 0m0.039s 00:05:42.195 07:56:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.195 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.454 07:56:53 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.454 07:56:53 -- rpc/rpc.sh@84 -- # killprocess 67587 00:05:42.454 07:56:53 -- common/autotest_common.sh@936 -- # '[' -z 67587 ']' 00:05:42.454 07:56:53 -- common/autotest_common.sh@940 -- # kill -0 67587 00:05:42.454 07:56:53 -- common/autotest_common.sh@941 -- # uname 00:05:42.454 07:56:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.454 07:56:53 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67587 00:05:42.454 killing process with pid 67587 00:05:42.454 07:56:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.454 07:56:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.454 07:56:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67587' 00:05:42.454 07:56:53 -- common/autotest_common.sh@955 -- # kill 67587 00:05:42.454 07:56:53 -- common/autotest_common.sh@960 -- # wait 67587 00:05:42.712 00:05:42.712 real 0m3.263s 00:05:42.712 user 0m4.328s 00:05:42.712 sys 0m0.761s 00:05:42.712 07:56:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.712 ************************************ 00:05:42.712 END TEST rpc 00:05:42.712 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.712 ************************************ 00:05:42.712 07:56:53 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:42.712 07:56:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.712 07:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.712 07:56:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.712 ************************************ 00:05:42.712 START TEST rpc_client 00:05:42.712 ************************************ 00:05:42.712 07:56:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:42.969 * Looking for test storage... 00:05:42.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:42.969 07:56:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.969 07:56:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.969 07:56:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:42.969 07:56:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:42.969 07:56:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:42.969 07:56:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:42.969 07:56:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:42.969 07:56:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:42.969 07:56:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:42.969 07:56:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.969 07:56:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:42.969 07:56:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:42.969 07:56:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:42.969 07:56:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:42.969 07:56:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:42.969 07:56:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:42.969 07:56:54 -- scripts/common.sh@344 -- # : 1 00:05:42.969 07:56:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:42.969 07:56:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.969 07:56:54 -- scripts/common.sh@364 -- # decimal 1 00:05:42.969 07:56:54 -- scripts/common.sh@352 -- # local d=1 00:05:42.969 07:56:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.969 07:56:54 -- scripts/common.sh@354 -- # echo 1 00:05:42.969 07:56:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:42.969 07:56:54 -- scripts/common.sh@365 -- # decimal 2 00:05:42.969 07:56:54 -- scripts/common.sh@352 -- # local d=2 00:05:42.969 07:56:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.969 07:56:54 -- scripts/common.sh@354 -- # echo 2 00:05:42.969 07:56:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:42.969 07:56:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:42.969 07:56:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:42.969 07:56:54 -- scripts/common.sh@367 -- # return 0 00:05:42.969 07:56:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.969 07:56:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:42.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.969 --rc genhtml_branch_coverage=1 00:05:42.969 --rc genhtml_function_coverage=1 00:05:42.969 --rc genhtml_legend=1 00:05:42.969 --rc geninfo_all_blocks=1 00:05:42.969 --rc geninfo_unexecuted_blocks=1 00:05:42.969 00:05:42.969 ' 00:05:42.969 07:56:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:42.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.969 --rc genhtml_branch_coverage=1 00:05:42.969 --rc genhtml_function_coverage=1 00:05:42.969 --rc genhtml_legend=1 00:05:42.969 --rc geninfo_all_blocks=1 00:05:42.969 --rc geninfo_unexecuted_blocks=1 00:05:42.969 00:05:42.969 ' 00:05:42.969 07:56:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:42.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.969 --rc genhtml_branch_coverage=1 00:05:42.969 --rc genhtml_function_coverage=1 00:05:42.969 --rc genhtml_legend=1 00:05:42.969 --rc geninfo_all_blocks=1 00:05:42.969 --rc geninfo_unexecuted_blocks=1 00:05:42.969 00:05:42.969 ' 00:05:42.969 07:56:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:42.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.969 --rc genhtml_branch_coverage=1 00:05:42.969 --rc genhtml_function_coverage=1 00:05:42.969 --rc genhtml_legend=1 00:05:42.969 --rc geninfo_all_blocks=1 00:05:42.969 --rc geninfo_unexecuted_blocks=1 00:05:42.969 00:05:42.969 ' 00:05:42.969 07:56:54 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:42.969 OK 00:05:42.969 07:56:54 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:42.969 00:05:42.969 real 0m0.189s 00:05:42.969 user 0m0.108s 00:05:42.969 sys 0m0.090s 00:05:42.969 ************************************ 00:05:42.969 END TEST rpc_client 00:05:42.969 ************************************ 00:05:42.969 07:56:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.969 07:56:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.969 07:56:54 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:42.969 07:56:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.969 07:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.969 07:56:54 -- common/autotest_common.sh@10 -- # set +x 00:05:42.969 ************************************ 00:05:42.969 START TEST 
json_config 00:05:42.969 ************************************ 00:05:42.969 07:56:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:42.969 07:56:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.969 07:56:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.969 07:56:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:43.227 07:56:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:43.227 07:56:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:43.227 07:56:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:43.227 07:56:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:43.227 07:56:54 -- scripts/common.sh@335 -- # IFS=.-: 00:05:43.227 07:56:54 -- scripts/common.sh@335 -- # read -ra ver1 00:05:43.227 07:56:54 -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.227 07:56:54 -- scripts/common.sh@336 -- # read -ra ver2 00:05:43.227 07:56:54 -- scripts/common.sh@337 -- # local 'op=<' 00:05:43.227 07:56:54 -- scripts/common.sh@339 -- # ver1_l=2 00:05:43.227 07:56:54 -- scripts/common.sh@340 -- # ver2_l=1 00:05:43.227 07:56:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:43.227 07:56:54 -- scripts/common.sh@343 -- # case "$op" in 00:05:43.227 07:56:54 -- scripts/common.sh@344 -- # : 1 00:05:43.227 07:56:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:43.227 07:56:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.227 07:56:54 -- scripts/common.sh@364 -- # decimal 1 00:05:43.227 07:56:54 -- scripts/common.sh@352 -- # local d=1 00:05:43.227 07:56:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.227 07:56:54 -- scripts/common.sh@354 -- # echo 1 00:05:43.227 07:56:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:43.227 07:56:54 -- scripts/common.sh@365 -- # decimal 2 00:05:43.227 07:56:54 -- scripts/common.sh@352 -- # local d=2 00:05:43.227 07:56:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.227 07:56:54 -- scripts/common.sh@354 -- # echo 2 00:05:43.227 07:56:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:43.227 07:56:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:43.227 07:56:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:43.227 07:56:54 -- scripts/common.sh@367 -- # return 0 00:05:43.227 07:56:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.227 07:56:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.227 --rc genhtml_branch_coverage=1 00:05:43.227 --rc genhtml_function_coverage=1 00:05:43.227 --rc genhtml_legend=1 00:05:43.227 --rc geninfo_all_blocks=1 00:05:43.227 --rc geninfo_unexecuted_blocks=1 00:05:43.227 00:05:43.227 ' 00:05:43.227 07:56:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.227 --rc genhtml_branch_coverage=1 00:05:43.227 --rc genhtml_function_coverage=1 00:05:43.227 --rc genhtml_legend=1 00:05:43.227 --rc geninfo_all_blocks=1 00:05:43.227 --rc geninfo_unexecuted_blocks=1 00:05:43.227 00:05:43.227 ' 00:05:43.227 07:56:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.227 --rc genhtml_branch_coverage=1 00:05:43.227 --rc genhtml_function_coverage=1 00:05:43.227 --rc genhtml_legend=1 00:05:43.227 --rc 
geninfo_all_blocks=1 00:05:43.227 --rc geninfo_unexecuted_blocks=1 00:05:43.227 00:05:43.227 ' 00:05:43.227 07:56:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:43.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.227 --rc genhtml_branch_coverage=1 00:05:43.227 --rc genhtml_function_coverage=1 00:05:43.227 --rc genhtml_legend=1 00:05:43.227 --rc geninfo_all_blocks=1 00:05:43.227 --rc geninfo_unexecuted_blocks=1 00:05:43.227 00:05:43.228 ' 00:05:43.228 07:56:54 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.228 07:56:54 -- nvmf/common.sh@7 -- # uname -s 00:05:43.228 07:56:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.228 07:56:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.228 07:56:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.228 07:56:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.228 07:56:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.228 07:56:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.228 07:56:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.228 07:56:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.228 07:56:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.228 07:56:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.228 07:56:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:05:43.228 07:56:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:05:43.228 07:56:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.228 07:56:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.228 07:56:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.228 07:56:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.228 07:56:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.228 07:56:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.228 07:56:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.228 07:56:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.228 07:56:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.228 07:56:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.228 
07:56:54 -- paths/export.sh@5 -- # export PATH 00:05:43.228 07:56:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.228 07:56:54 -- nvmf/common.sh@46 -- # : 0 00:05:43.228 07:56:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.228 07:56:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.228 07:56:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.228 07:56:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.228 07:56:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.228 07:56:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.228 07:56:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.228 07:56:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.228 07:56:54 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:43.228 07:56:54 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:43.228 07:56:54 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:43.228 07:56:54 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:43.228 07:56:54 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:43.228 07:56:54 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:43.228 07:56:54 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:43.228 07:56:54 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:43.228 07:56:54 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:43.228 07:56:54 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:43.228 07:56:54 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:43.228 07:56:54 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:43.228 07:56:54 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:43.228 07:56:54 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.228 07:56:54 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:43.228 INFO: JSON configuration test init 00:05:43.228 07:56:54 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:43.228 07:56:54 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:43.228 07:56:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.228 07:56:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.228 07:56:54 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:43.228 07:56:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.228 07:56:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.228 07:56:54 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:43.228 07:56:54 -- json_config/json_config.sh@98 -- # local app=target 00:05:43.228 
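(Annotation: json_config_test_start_app launches the target with --wait-for-rpc so it sits idle until configuration arrives over /var/tmp/spdk_tgt.sock, and waitforlisten just polls that socket before the test proceeds. A rough manual equivalent, reusing the binary path and flags shown in this log — the poll loop below is only a sketch standing in for the waitforlisten helper:)

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # keep retrying a harmless RPC until the socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done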
07:56:54 -- json_config/json_config.sh@99 -- # shift 00:05:43.228 07:56:54 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:43.228 Waiting for target to run... 00:05:43.228 07:56:54 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:43.228 07:56:54 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:43.228 07:56:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.228 07:56:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.228 07:56:54 -- json_config/json_config.sh@111 -- # app_pid[$app]=67914 00:05:43.228 07:56:54 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:43.228 07:56:54 -- json_config/json_config.sh@114 -- # waitforlisten 67914 /var/tmp/spdk_tgt.sock 00:05:43.228 07:56:54 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:43.228 07:56:54 -- common/autotest_common.sh@829 -- # '[' -z 67914 ']' 00:05:43.228 07:56:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.228 07:56:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.228 07:56:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.228 07:56:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.228 07:56:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.228 [2024-12-07 07:56:54.426270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:43.228 [2024-12-07 07:56:54.426618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67914 ] 00:05:43.794 [2024-12-07 07:56:54.854441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.794 [2024-12-07 07:56:54.911980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.794 [2024-12-07 07:56:54.912462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.359 07:56:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.359 00:05:44.359 07:56:55 -- common/autotest_common.sh@862 -- # return 0 00:05:44.359 07:56:55 -- json_config/json_config.sh@115 -- # echo '' 00:05:44.359 07:56:55 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:44.359 07:56:55 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:44.359 07:56:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.359 07:56:55 -- common/autotest_common.sh@10 -- # set +x 00:05:44.360 07:56:55 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:44.360 07:56:55 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:44.360 07:56:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:44.360 07:56:55 -- common/autotest_common.sh@10 -- # set +x 00:05:44.360 07:56:55 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:44.360 07:56:55 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:44.360 07:56:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:05:44.926 07:56:55 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:44.926 07:56:55 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:44.926 07:56:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.926 07:56:55 -- common/autotest_common.sh@10 -- # set +x 00:05:44.926 07:56:55 -- json_config/json_config.sh@48 -- # local ret=0 00:05:44.926 07:56:55 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:44.926 07:56:55 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:44.926 07:56:55 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:44.926 07:56:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:44.926 07:56:55 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:45.185 07:56:56 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:45.185 07:56:56 -- json_config/json_config.sh@51 -- # local get_types 00:05:45.185 07:56:56 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:45.185 07:56:56 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:45.185 07:56:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.185 07:56:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.185 07:56:56 -- json_config/json_config.sh@58 -- # return 0 00:05:45.185 07:56:56 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:45.185 07:56:56 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:45.185 07:56:56 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:45.185 07:56:56 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:45.185 07:56:56 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:45.185 07:56:56 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:45.185 07:56:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.185 07:56:56 -- common/autotest_common.sh@10 -- # set +x 00:05:45.185 07:56:56 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:45.185 07:56:56 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:45.185 07:56:56 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:45.185 07:56:56 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.185 07:56:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.443 MallocForNvmf0 00:05:45.443 07:56:56 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.443 07:56:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.701 MallocForNvmf1 00:05:45.701 07:56:56 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:45.701 07:56:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:45.960 [2024-12-07 07:56:57.073206] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.960 07:56:57 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:45.960 07:56:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.219 07:56:57 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.219 07:56:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.478 07:56:57 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.478 07:56:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.737 07:56:57 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:46.737 07:56:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:46.996 [2024-12-07 07:56:58.061771] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:46.996 07:56:58 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:46.996 07:56:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.996 07:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:46.996 07:56:58 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:46.996 07:56:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.996 07:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:46.996 07:56:58 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:46.996 07:56:58 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:46.996 07:56:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.255 MallocBdevForConfigChangeCheck 00:05:47.255 07:56:58 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:47.255 07:56:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.255 07:56:58 -- common/autotest_common.sh@10 -- # set +x 00:05:47.255 07:56:58 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:47.255 07:56:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.822 INFO: shutting down applications... 00:05:47.822 07:56:58 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
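(Annotation: the NVMe-oF TCP configuration that save_config captures here is built from a handful of rpc.py calls, all of which appear verbatim in this log. Collected into one place as a sketch — rpc is only a local shorthand for the socket-qualified rpc.py invocation:)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # spare bdev, only used later to exercise configuration-change detection
    $rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck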
00:05:47.822 07:56:58 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:47.822 07:56:58 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:47.822 07:56:58 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:47.822 07:56:58 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:47.822 Calling clear_iscsi_subsystem 00:05:47.822 Calling clear_nvmf_subsystem 00:05:47.822 Calling clear_nbd_subsystem 00:05:47.822 Calling clear_ublk_subsystem 00:05:47.822 Calling clear_vhost_blk_subsystem 00:05:47.822 Calling clear_vhost_scsi_subsystem 00:05:47.822 Calling clear_scheduler_subsystem 00:05:47.822 Calling clear_bdev_subsystem 00:05:47.822 Calling clear_accel_subsystem 00:05:47.822 Calling clear_vmd_subsystem 00:05:47.822 Calling clear_sock_subsystem 00:05:47.822 Calling clear_iobuf_subsystem 00:05:47.822 07:56:59 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:47.823 07:56:59 -- json_config/json_config.sh@396 -- # count=100 00:05:47.823 07:56:59 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:47.823 07:56:59 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.823 07:56:59 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:47.823 07:56:59 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:48.391 07:56:59 -- json_config/json_config.sh@398 -- # break 00:05:48.391 07:56:59 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:48.391 07:56:59 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:48.391 07:56:59 -- json_config/json_config.sh@120 -- # local app=target 00:05:48.391 07:56:59 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:48.391 07:56:59 -- json_config/json_config.sh@124 -- # [[ -n 67914 ]] 00:05:48.391 07:56:59 -- json_config/json_config.sh@127 -- # kill -SIGINT 67914 00:05:48.391 07:56:59 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:48.391 07:56:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.391 07:56:59 -- json_config/json_config.sh@130 -- # kill -0 67914 00:05:48.391 07:56:59 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:48.958 07:56:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:48.958 07:56:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.958 07:56:59 -- json_config/json_config.sh@130 -- # kill -0 67914 00:05:48.958 07:56:59 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:48.958 07:56:59 -- json_config/json_config.sh@132 -- # break 00:05:48.958 SPDK target shutdown done 00:05:48.958 07:56:59 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:48.958 07:56:59 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:48.958 INFO: relaunching applications... 00:05:48.958 07:56:59 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
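(Annotation: json_config_clear drives clear_config.py against the target and then verifies, via save_config piped through config_filter.py, that nothing but global parameters remains. A condensed sketch of that check, assuming config_filter.py filters the configuration on stdin as its invocation here suggests:)

    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
        | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty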
00:05:48.958 07:56:59 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.958 07:56:59 -- json_config/json_config.sh@98 -- # local app=target 00:05:48.958 07:56:59 -- json_config/json_config.sh@99 -- # shift 00:05:48.958 07:56:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:48.958 07:56:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:48.958 07:56:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:48.958 07:56:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.958 07:56:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.958 07:56:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=68183 00:05:48.958 07:56:59 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.958 Waiting for target to run... 00:05:48.958 07:56:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:48.958 07:56:59 -- json_config/json_config.sh@114 -- # waitforlisten 68183 /var/tmp/spdk_tgt.sock 00:05:48.958 07:56:59 -- common/autotest_common.sh@829 -- # '[' -z 68183 ']' 00:05:48.958 07:56:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.958 07:56:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.958 07:56:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.958 07:56:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.958 07:56:59 -- common/autotest_common.sh@10 -- # set +x 00:05:48.958 [2024-12-07 07:57:00.032174] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.958 [2024-12-07 07:57:00.032296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68183 ] 00:05:49.217 [2024-12-07 07:57:00.464148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.476 [2024-12-07 07:57:00.520160] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.476 [2024-12-07 07:57:00.520354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.735 [2024-12-07 07:57:00.816398] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.735 [2024-12-07 07:57:00.848481] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.993 07:57:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.993 07:57:01 -- common/autotest_common.sh@862 -- # return 0 00:05:49.993 07:57:01 -- json_config/json_config.sh@115 -- # echo '' 00:05:49.993 00:05:49.993 07:57:01 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:49.993 INFO: Checking if target configuration is the same... 00:05:49.993 07:57:01 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
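(Annotation: on relaunch the target is pointed straight at the JSON written earlier by save_config, so the whole configuration is restored in one shot instead of replaying individual RPCs; the invocation used by this run is:)

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json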
00:05:49.993 07:57:01 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.993 07:57:01 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:49.993 07:57:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.993 + '[' 2 -ne 2 ']' 00:05:49.993 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:49.993 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:49.994 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:49.994 +++ basename /dev/fd/62 00:05:49.994 ++ mktemp /tmp/62.XXX 00:05:49.994 + tmp_file_1=/tmp/62.9Zj 00:05:49.994 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.994 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:49.994 + tmp_file_2=/tmp/spdk_tgt_config.json.BwP 00:05:49.994 + ret=0 00:05:49.994 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.252 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.252 + diff -u /tmp/62.9Zj /tmp/spdk_tgt_config.json.BwP 00:05:50.252 + echo 'INFO: JSON config files are the same' 00:05:50.252 INFO: JSON config files are the same 00:05:50.252 + rm /tmp/62.9Zj /tmp/spdk_tgt_config.json.BwP 00:05:50.252 + exit 0 00:05:50.252 07:57:01 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:50.252 INFO: changing configuration and checking if this can be detected... 00:05:50.252 07:57:01 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:50.252 07:57:01 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.252 07:57:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.512 07:57:01 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.512 07:57:01 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:50.512 07:57:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.512 + '[' 2 -ne 2 ']' 00:05:50.512 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:50.512 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
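(Annotation: the "configuration is the same" check boils down to sorting both JSON documents with config_filter.py and diffing them; exit status 0 means they match. A sketch with illustrative temp-file names — /tmp/live_sorted.json and /tmp/file_sorted.json stand in for the mktemp names used by json_diff.sh — assuming config_filter.py reads stdin and writes stdout as it is invoked here:)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    $rpc save_config | $filter -method sort > /tmp/live_sorted.json
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file_sorted.json
    diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'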
00:05:50.512 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:50.512 +++ basename /dev/fd/62 00:05:50.512 ++ mktemp /tmp/62.XXX 00:05:50.512 + tmp_file_1=/tmp/62.H7t 00:05:50.512 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.770 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.770 + tmp_file_2=/tmp/spdk_tgt_config.json.PeV 00:05:50.770 + ret=0 00:05:50.770 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:51.029 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:51.029 + diff -u /tmp/62.H7t /tmp/spdk_tgt_config.json.PeV 00:05:51.029 + ret=1 00:05:51.029 + echo '=== Start of file: /tmp/62.H7t ===' 00:05:51.029 + cat /tmp/62.H7t 00:05:51.029 + echo '=== End of file: /tmp/62.H7t ===' 00:05:51.029 + echo '' 00:05:51.029 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PeV ===' 00:05:51.029 + cat /tmp/spdk_tgt_config.json.PeV 00:05:51.029 + echo '=== End of file: /tmp/spdk_tgt_config.json.PeV ===' 00:05:51.029 + echo '' 00:05:51.029 + rm /tmp/62.H7t /tmp/spdk_tgt_config.json.PeV 00:05:51.029 + exit 1 00:05:51.029 INFO: configuration change detected. 00:05:51.029 07:57:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:51.029 07:57:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:51.029 07:57:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:51.029 07:57:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.029 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 07:57:02 -- json_config/json_config.sh@360 -- # local ret=0 00:05:51.029 07:57:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:51.029 07:57:02 -- json_config/json_config.sh@370 -- # [[ -n 68183 ]] 00:05:51.029 07:57:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:51.029 07:57:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:51.029 07:57:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.029 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 07:57:02 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:51.029 07:57:02 -- json_config/json_config.sh@246 -- # uname -s 00:05:51.029 07:57:02 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:51.029 07:57:02 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:51.029 07:57:02 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:51.029 07:57:02 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:51.029 07:57:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.029 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.029 07:57:02 -- json_config/json_config.sh@376 -- # killprocess 68183 00:05:51.029 07:57:02 -- common/autotest_common.sh@936 -- # '[' -z 68183 ']' 00:05:51.029 07:57:02 -- common/autotest_common.sh@940 -- # kill -0 68183 00:05:51.029 07:57:02 -- common/autotest_common.sh@941 -- # uname 00:05:51.029 07:57:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.029 07:57:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68183 00:05:51.029 07:57:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.029 killing process with pid 68183 00:05:51.029 07:57:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.029 07:57:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68183' 00:05:51.029 
07:57:02 -- common/autotest_common.sh@955 -- # kill 68183 00:05:51.029 07:57:02 -- common/autotest_common.sh@960 -- # wait 68183 00:05:51.289 07:57:02 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.289 07:57:02 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:51.289 07:57:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.289 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.289 07:57:02 -- json_config/json_config.sh@381 -- # return 0 00:05:51.289 INFO: Success 00:05:51.289 07:57:02 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:51.289 00:05:51.289 real 0m8.380s 00:05:51.289 user 0m11.816s 00:05:51.289 sys 0m1.907s 00:05:51.289 07:57:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:51.289 ************************************ 00:05:51.289 END TEST json_config 00:05:51.289 ************************************ 00:05:51.289 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.548 07:57:02 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.548 07:57:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:51.548 07:57:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.548 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.548 ************************************ 00:05:51.548 START TEST json_config_extra_key 00:05:51.548 ************************************ 00:05:51.548 07:57:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.548 07:57:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:51.548 07:57:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:51.548 07:57:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:51.548 07:57:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:51.548 07:57:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:51.548 07:57:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:51.548 07:57:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:51.548 07:57:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:51.548 07:57:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:51.548 07:57:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.548 07:57:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:51.548 07:57:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:51.548 07:57:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:51.548 07:57:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:51.548 07:57:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:51.548 07:57:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:51.549 07:57:02 -- scripts/common.sh@344 -- # : 1 00:05:51.549 07:57:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:51.549 07:57:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.549 07:57:02 -- scripts/common.sh@364 -- # decimal 1 00:05:51.549 07:57:02 -- scripts/common.sh@352 -- # local d=1 00:05:51.549 07:57:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.549 07:57:02 -- scripts/common.sh@354 -- # echo 1 00:05:51.549 07:57:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:51.549 07:57:02 -- scripts/common.sh@365 -- # decimal 2 00:05:51.549 07:57:02 -- scripts/common.sh@352 -- # local d=2 00:05:51.549 07:57:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.549 07:57:02 -- scripts/common.sh@354 -- # echo 2 00:05:51.549 07:57:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:51.549 07:57:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:51.549 07:57:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:51.549 07:57:02 -- scripts/common.sh@367 -- # return 0 00:05:51.549 07:57:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.549 07:57:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.549 --rc genhtml_branch_coverage=1 00:05:51.549 --rc genhtml_function_coverage=1 00:05:51.549 --rc genhtml_legend=1 00:05:51.549 --rc geninfo_all_blocks=1 00:05:51.549 --rc geninfo_unexecuted_blocks=1 00:05:51.549 00:05:51.549 ' 00:05:51.549 07:57:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.549 --rc genhtml_branch_coverage=1 00:05:51.549 --rc genhtml_function_coverage=1 00:05:51.549 --rc genhtml_legend=1 00:05:51.549 --rc geninfo_all_blocks=1 00:05:51.549 --rc geninfo_unexecuted_blocks=1 00:05:51.549 00:05:51.549 ' 00:05:51.549 07:57:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.549 --rc genhtml_branch_coverage=1 00:05:51.549 --rc genhtml_function_coverage=1 00:05:51.549 --rc genhtml_legend=1 00:05:51.549 --rc geninfo_all_blocks=1 00:05:51.549 --rc geninfo_unexecuted_blocks=1 00:05:51.549 00:05:51.549 ' 00:05:51.549 07:57:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:51.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.549 --rc genhtml_branch_coverage=1 00:05:51.549 --rc genhtml_function_coverage=1 00:05:51.549 --rc genhtml_legend=1 00:05:51.549 --rc geninfo_all_blocks=1 00:05:51.549 --rc geninfo_unexecuted_blocks=1 00:05:51.549 00:05:51.549 ' 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.549 07:57:02 -- nvmf/common.sh@7 -- # uname -s 00:05:51.549 07:57:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.549 07:57:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.549 07:57:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.549 07:57:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.549 07:57:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.549 07:57:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.549 07:57:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.549 07:57:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.549 07:57:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.549 07:57:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.549 07:57:02 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:05:51.549 07:57:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:05:51.549 07:57:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.549 07:57:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.549 07:57:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.549 07:57:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.549 07:57:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.549 07:57:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.549 07:57:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.549 07:57:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.549 07:57:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.549 07:57:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.549 07:57:02 -- paths/export.sh@5 -- # export PATH 00:05:51.549 07:57:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.549 07:57:02 -- nvmf/common.sh@46 -- # : 0 00:05:51.549 07:57:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:51.549 07:57:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:51.549 07:57:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:51.549 07:57:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.549 07:57:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.549 07:57:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:51.549 07:57:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:51.549 07:57:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.549 INFO: launching applications... 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68366 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.549 Waiting for target to run... 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:51.549 07:57:02 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68366 /var/tmp/spdk_tgt.sock 00:05:51.549 07:57:02 -- common/autotest_common.sh@829 -- # '[' -z 68366 ']' 00:05:51.549 07:57:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.549 07:57:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.549 07:57:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.549 07:57:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.549 07:57:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.808 [2024-12-07 07:57:02.841917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:51.808 [2024-12-07 07:57:02.842019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68366 ] 00:05:52.067 [2024-12-07 07:57:03.277789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.067 [2024-12-07 07:57:03.340163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.067 [2024-12-07 07:57:03.340352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.635 07:57:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.635 07:57:03 -- common/autotest_common.sh@862 -- # return 0 00:05:52.635 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:52.635 INFO: shutting down applications... 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68366 ]] 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68366 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68366 00:05:52.635 07:57:03 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68366 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:53.203 SPDK target shutdown done 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:53.203 Success 00:05:53.203 07:57:04 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:53.203 00:05:53.203 real 0m1.758s 00:05:53.203 user 0m1.597s 00:05:53.203 sys 0m0.489s 00:05:53.203 07:57:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:53.203 ************************************ 00:05:53.203 07:57:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.203 END TEST json_config_extra_key 00:05:53.203 ************************************ 00:05:53.203 07:57:04 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.203 07:57:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.203 07:57:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.203 07:57:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.203 ************************************ 00:05:53.203 START TEST alias_rpc 00:05:53.203 ************************************ 00:05:53.203 07:57:04 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.203 * Looking for test storage... 00:05:53.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:53.463 07:57:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:53.463 07:57:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:53.463 07:57:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:53.463 07:57:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:53.463 07:57:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:53.463 07:57:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:53.463 07:57:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:53.463 07:57:04 -- scripts/common.sh@335 -- # IFS=.-: 00:05:53.463 07:57:04 -- scripts/common.sh@335 -- # read -ra ver1 00:05:53.463 07:57:04 -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.463 07:57:04 -- scripts/common.sh@336 -- # read -ra ver2 00:05:53.463 07:57:04 -- scripts/common.sh@337 -- # local 'op=<' 00:05:53.463 07:57:04 -- scripts/common.sh@339 -- # ver1_l=2 00:05:53.463 07:57:04 -- scripts/common.sh@340 -- # ver2_l=1 00:05:53.463 07:57:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:53.463 07:57:04 -- scripts/common.sh@343 -- # case "$op" in 00:05:53.463 07:57:04 -- scripts/common.sh@344 -- # : 1 00:05:53.463 07:57:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:53.463 07:57:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.463 07:57:04 -- scripts/common.sh@364 -- # decimal 1 00:05:53.463 07:57:04 -- scripts/common.sh@352 -- # local d=1 00:05:53.463 07:57:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.463 07:57:04 -- scripts/common.sh@354 -- # echo 1 00:05:53.463 07:57:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:53.463 07:57:04 -- scripts/common.sh@365 -- # decimal 2 00:05:53.463 07:57:04 -- scripts/common.sh@352 -- # local d=2 00:05:53.463 07:57:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.463 07:57:04 -- scripts/common.sh@354 -- # echo 2 00:05:53.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.463 07:57:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:53.463 07:57:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:53.463 07:57:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:53.463 07:57:04 -- scripts/common.sh@367 -- # return 0 00:05:53.463 07:57:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.463 07:57:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:53.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.463 --rc genhtml_branch_coverage=1 00:05:53.463 --rc genhtml_function_coverage=1 00:05:53.463 --rc genhtml_legend=1 00:05:53.463 --rc geninfo_all_blocks=1 00:05:53.463 --rc geninfo_unexecuted_blocks=1 00:05:53.463 00:05:53.463 ' 00:05:53.464 07:57:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.464 --rc genhtml_branch_coverage=1 00:05:53.464 --rc genhtml_function_coverage=1 00:05:53.464 --rc genhtml_legend=1 00:05:53.464 --rc geninfo_all_blocks=1 00:05:53.464 --rc geninfo_unexecuted_blocks=1 00:05:53.464 00:05:53.464 ' 00:05:53.464 07:57:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.464 --rc genhtml_branch_coverage=1 00:05:53.464 --rc genhtml_function_coverage=1 00:05:53.464 --rc genhtml_legend=1 00:05:53.464 --rc geninfo_all_blocks=1 00:05:53.464 --rc geninfo_unexecuted_blocks=1 00:05:53.464 00:05:53.464 ' 00:05:53.464 07:57:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.464 --rc genhtml_branch_coverage=1 00:05:53.464 --rc genhtml_function_coverage=1 00:05:53.464 --rc genhtml_legend=1 00:05:53.464 --rc geninfo_all_blocks=1 00:05:53.464 --rc geninfo_unexecuted_blocks=1 00:05:53.464 00:05:53.464 ' 00:05:53.464 07:57:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.464 07:57:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68455 00:05:53.464 07:57:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68455 00:05:53.464 07:57:04 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.464 07:57:04 -- common/autotest_common.sh@829 -- # '[' -z 68455 ']' 00:05:53.464 07:57:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.464 07:57:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.464 07:57:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.464 07:57:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.464 07:57:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.464 [2024-12-07 07:57:04.650896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:53.464 [2024-12-07 07:57:04.651185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68455 ] 00:05:53.723 [2024-12-07 07:57:04.787603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.723 [2024-12-07 07:57:04.846063] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.723 [2024-12-07 07:57:04.846509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.659 07:57:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.659 07:57:05 -- common/autotest_common.sh@862 -- # return 0 00:05:54.659 07:57:05 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:54.659 07:57:05 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68455 00:05:54.659 07:57:05 -- common/autotest_common.sh@936 -- # '[' -z 68455 ']' 00:05:54.659 07:57:05 -- common/autotest_common.sh@940 -- # kill -0 68455 00:05:54.659 07:57:05 -- common/autotest_common.sh@941 -- # uname 00:05:54.659 07:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.659 07:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68455 00:05:54.920 07:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.920 killing process with pid 68455 00:05:54.920 07:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.920 07:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68455' 00:05:54.920 07:57:05 -- common/autotest_common.sh@955 -- # kill 68455 00:05:54.920 07:57:05 -- common/autotest_common.sh@960 -- # wait 68455 00:05:55.201 ************************************ 00:05:55.201 END TEST alias_rpc 00:05:55.201 ************************************ 00:05:55.201 00:05:55.201 real 0m1.882s 00:05:55.201 user 0m2.145s 00:05:55.201 sys 0m0.439s 00:05:55.201 07:57:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:55.201 07:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.201 07:57:06 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:05:55.201 07:57:06 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.201 07:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.201 07:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.201 07:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.201 ************************************ 00:05:55.201 START TEST dpdk_mem_utility 00:05:55.201 ************************************ 00:05:55.201 07:57:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:55.201 * Looking for test storage... 
00:05:55.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:55.201 07:57:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:55.201 07:57:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:55.201 07:57:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:55.467 07:57:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:55.467 07:57:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:55.467 07:57:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:55.467 07:57:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:55.467 07:57:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:55.467 07:57:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:55.467 07:57:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.467 07:57:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:55.467 07:57:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:55.467 07:57:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:55.467 07:57:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:55.467 07:57:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:55.467 07:57:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:55.467 07:57:06 -- scripts/common.sh@344 -- # : 1 00:05:55.467 07:57:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:55.467 07:57:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.467 07:57:06 -- scripts/common.sh@364 -- # decimal 1 00:05:55.467 07:57:06 -- scripts/common.sh@352 -- # local d=1 00:05:55.467 07:57:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.467 07:57:06 -- scripts/common.sh@354 -- # echo 1 00:05:55.467 07:57:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:55.467 07:57:06 -- scripts/common.sh@365 -- # decimal 2 00:05:55.467 07:57:06 -- scripts/common.sh@352 -- # local d=2 00:05:55.467 07:57:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.467 07:57:06 -- scripts/common.sh@354 -- # echo 2 00:05:55.467 07:57:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:55.467 07:57:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:55.467 07:57:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:55.467 07:57:06 -- scripts/common.sh@367 -- # return 0 00:05:55.467 07:57:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.467 07:57:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.467 --rc genhtml_branch_coverage=1 00:05:55.467 --rc genhtml_function_coverage=1 00:05:55.467 --rc genhtml_legend=1 00:05:55.467 --rc geninfo_all_blocks=1 00:05:55.467 --rc geninfo_unexecuted_blocks=1 00:05:55.467 00:05:55.467 ' 00:05:55.467 07:57:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.467 --rc genhtml_branch_coverage=1 00:05:55.467 --rc genhtml_function_coverage=1 00:05:55.467 --rc genhtml_legend=1 00:05:55.467 --rc geninfo_all_blocks=1 00:05:55.467 --rc geninfo_unexecuted_blocks=1 00:05:55.467 00:05:55.467 ' 00:05:55.467 07:57:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.467 --rc genhtml_branch_coverage=1 00:05:55.467 --rc genhtml_function_coverage=1 00:05:55.467 --rc genhtml_legend=1 00:05:55.467 --rc geninfo_all_blocks=1 00:05:55.467 --rc geninfo_unexecuted_blocks=1 00:05:55.467 00:05:55.467 ' 
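Once the spdk_tgt launched below is up, the dpdk_mem_utility test exercises two entry points visible in this trace: the env_dpdk_get_mem_stats RPC, whose JSON reply names the dump file /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which summarizes that dump (overall heaps, mempools and memzones, plus per-heap detail with -m 0). A condensed sketch of the same sequence, assuming a target is already listening on /var/tmp/spdk.sock as in this run:

  # Sketch of the dpdk_mem_utility check (commands as they appear in this trace).
  rootdir=/home/vagrant/spdk_repo/spdk

  # Ask the running spdk_tgt to write DPDK memory statistics; the JSON reply names the dump file.
  "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock env_dpdk_get_mem_stats

  # Summarize the dump: totals first, then heap 0 in detail (busy/free elements, malloc lists, memzones).
  "$rootdir/scripts/dpdk_mem_info.py"
  "$rootdir/scripts/dpdk_mem_info.py" -m 0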
00:05:55.467 07:57:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:55.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.467 --rc genhtml_branch_coverage=1 00:05:55.468 --rc genhtml_function_coverage=1 00:05:55.468 --rc genhtml_legend=1 00:05:55.468 --rc geninfo_all_blocks=1 00:05:55.468 --rc geninfo_unexecuted_blocks=1 00:05:55.468 00:05:55.468 ' 00:05:55.468 07:57:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:55.468 07:57:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68554 00:05:55.468 07:57:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68554 00:05:55.468 07:57:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:55.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.468 07:57:06 -- common/autotest_common.sh@829 -- # '[' -z 68554 ']' 00:05:55.468 07:57:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.468 07:57:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.468 07:57:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.468 07:57:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.468 07:57:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.468 [2024-12-07 07:57:06.586870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.468 [2024-12-07 07:57:06.587229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68554 ] 00:05:55.468 [2024-12-07 07:57:06.724749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.727 [2024-12-07 07:57:06.791322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.727 [2024-12-07 07:57:06.791491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.293 07:57:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.293 07:57:07 -- common/autotest_common.sh@862 -- # return 0 00:05:56.293 07:57:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:56.293 07:57:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:56.293 07:57:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.293 07:57:07 -- common/autotest_common.sh@10 -- # set +x 00:05:56.293 { 00:05:56.293 "filename": "/tmp/spdk_mem_dump.txt" 00:05:56.293 } 00:05:56.293 07:57:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.293 07:57:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:56.553 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:56.553 1 heaps totaling size 814.000000 MiB 00:05:56.553 size: 814.000000 MiB heap id: 0 00:05:56.553 end heaps---------- 00:05:56.553 8 mempools totaling size 598.116089 MiB 00:05:56.553 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:56.553 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:56.553 size: 84.521057 MiB name: bdev_io_68554 00:05:56.553 size: 51.011292 MiB name: evtpool_68554 00:05:56.553 size: 50.003479 MiB name: msgpool_68554 
00:05:56.553 size: 21.763794 MiB name: PDU_Pool 00:05:56.553 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:56.553 size: 0.026123 MiB name: Session_Pool 00:05:56.553 end mempools------- 00:05:56.553 6 memzones totaling size 4.142822 MiB 00:05:56.553 size: 1.000366 MiB name: RG_ring_0_68554 00:05:56.553 size: 1.000366 MiB name: RG_ring_1_68554 00:05:56.553 size: 1.000366 MiB name: RG_ring_4_68554 00:05:56.553 size: 1.000366 MiB name: RG_ring_5_68554 00:05:56.553 size: 0.125366 MiB name: RG_ring_2_68554 00:05:56.553 size: 0.015991 MiB name: RG_ring_3_68554 00:05:56.553 end memzones------- 00:05:56.553 07:57:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:56.553 heap id: 0 total size: 814.000000 MiB number of busy elements: 222 number of free elements: 15 00:05:56.553 list of free elements. size: 12.486206 MiB 00:05:56.553 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:56.553 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:56.553 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:56.553 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:56.553 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:56.553 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:56.553 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:56.553 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:56.553 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:56.553 element at address: 0x20001aa00000 with size: 0.572266 MiB 00:05:56.553 element at address: 0x20000b200000 with size: 0.489441 MiB 00:05:56.553 element at address: 0x200000800000 with size: 0.486877 MiB 00:05:56.553 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:56.553 element at address: 0x200027e00000 with size: 0.398132 MiB 00:05:56.553 element at address: 0x200003a00000 with size: 0.351501 MiB 00:05:56.553 list of standard malloc elements. 
size: 199.251221 MiB 00:05:56.553 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:56.553 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:56.553 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:56.553 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:56.553 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:56.553 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:56.553 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:56.553 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:56.553 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:56.553 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a59fc0 with size: 0.000183 MiB 
00:05:56.553 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:56.553 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:56.553 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:56.554 element at 
address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa95140 
with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:56.554 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6ec40 with size: 0.000183 MiB 
00:05:56.554 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:56.554 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:56.554 list of memzone associated elements. 
size: 602.262573 MiB 00:05:56.554 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:56.554 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:56.554 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:56.554 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:56.554 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:56.554 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68554_0 00:05:56.554 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:56.554 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68554_0 00:05:56.554 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:56.554 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68554_0 00:05:56.554 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:56.554 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:56.554 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:56.554 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:56.554 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:56.554 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68554 00:05:56.554 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:56.554 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68554 00:05:56.554 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:56.554 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68554 00:05:56.554 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:56.554 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:56.554 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:56.555 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:56.555 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:56.555 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:56.555 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:56.555 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:56.555 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:56.555 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68554 00:05:56.555 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:56.555 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68554 00:05:56.555 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:56.555 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68554 00:05:56.555 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:56.555 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68554 00:05:56.555 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:56.555 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68554 00:05:56.555 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:56.555 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:56.555 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:56.555 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:56.555 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:56.555 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:56.555 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:56.555 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68554 00:05:56.555 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:56.555 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:56.555 element at address: 0x200027e66040 with size: 0.023743 MiB 00:05:56.555 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:56.555 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:56.555 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68554 00:05:56.555 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:05:56.555 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:56.555 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:56.555 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68554 00:05:56.555 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:56.555 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68554 00:05:56.555 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:05:56.555 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:56.555 07:57:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:56.555 07:57:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68554 00:05:56.555 07:57:07 -- common/autotest_common.sh@936 -- # '[' -z 68554 ']' 00:05:56.555 07:57:07 -- common/autotest_common.sh@940 -- # kill -0 68554 00:05:56.555 07:57:07 -- common/autotest_common.sh@941 -- # uname 00:05:56.555 07:57:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.555 07:57:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68554 00:05:56.555 07:57:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.555 07:57:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.555 killing process with pid 68554 00:05:56.555 07:57:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68554' 00:05:56.555 07:57:07 -- common/autotest_common.sh@955 -- # kill 68554 00:05:56.555 07:57:07 -- common/autotest_common.sh@960 -- # wait 68554 00:05:56.813 00:05:56.813 real 0m1.711s 00:05:56.813 user 0m1.822s 00:05:56.813 sys 0m0.432s 00:05:56.813 07:57:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.813 ************************************ 00:05:56.813 END TEST dpdk_mem_utility 00:05:56.813 ************************************ 00:05:56.813 07:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:57.071 07:57:08 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.071 07:57:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.071 07:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.071 07:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:57.071 ************************************ 00:05:57.071 START TEST event 00:05:57.071 ************************************ 00:05:57.071 07:57:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.071 * Looking for test storage... 
00:05:57.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.071 07:57:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:57.071 07:57:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:57.071 07:57:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:57.071 07:57:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:57.071 07:57:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:57.071 07:57:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:57.071 07:57:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:57.071 07:57:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:57.071 07:57:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:57.071 07:57:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.071 07:57:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:57.071 07:57:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:57.071 07:57:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:57.071 07:57:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:57.071 07:57:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:57.071 07:57:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:57.071 07:57:08 -- scripts/common.sh@344 -- # : 1 00:05:57.071 07:57:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:57.071 07:57:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.071 07:57:08 -- scripts/common.sh@364 -- # decimal 1 00:05:57.071 07:57:08 -- scripts/common.sh@352 -- # local d=1 00:05:57.071 07:57:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.071 07:57:08 -- scripts/common.sh@354 -- # echo 1 00:05:57.071 07:57:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:57.071 07:57:08 -- scripts/common.sh@365 -- # decimal 2 00:05:57.071 07:57:08 -- scripts/common.sh@352 -- # local d=2 00:05:57.071 07:57:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.071 07:57:08 -- scripts/common.sh@354 -- # echo 2 00:05:57.071 07:57:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:57.071 07:57:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:57.071 07:57:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:57.071 07:57:08 -- scripts/common.sh@367 -- # return 0 00:05:57.071 07:57:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.071 07:57:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:57.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.071 --rc genhtml_branch_coverage=1 00:05:57.071 --rc genhtml_function_coverage=1 00:05:57.071 --rc genhtml_legend=1 00:05:57.071 --rc geninfo_all_blocks=1 00:05:57.071 --rc geninfo_unexecuted_blocks=1 00:05:57.071 00:05:57.071 ' 00:05:57.071 07:57:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:57.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.071 --rc genhtml_branch_coverage=1 00:05:57.071 --rc genhtml_function_coverage=1 00:05:57.071 --rc genhtml_legend=1 00:05:57.071 --rc geninfo_all_blocks=1 00:05:57.071 --rc geninfo_unexecuted_blocks=1 00:05:57.071 00:05:57.071 ' 00:05:57.071 07:57:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:57.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.071 --rc genhtml_branch_coverage=1 00:05:57.071 --rc genhtml_function_coverage=1 00:05:57.071 --rc genhtml_legend=1 00:05:57.071 --rc geninfo_all_blocks=1 00:05:57.071 --rc geninfo_unexecuted_blocks=1 00:05:57.071 00:05:57.071 ' 00:05:57.071 07:57:08 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:57.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.071 --rc genhtml_branch_coverage=1 00:05:57.071 --rc genhtml_function_coverage=1 00:05:57.071 --rc genhtml_legend=1 00:05:57.071 --rc geninfo_all_blocks=1 00:05:57.071 --rc geninfo_unexecuted_blocks=1 00:05:57.071 00:05:57.071 ' 00:05:57.071 07:57:08 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.071 07:57:08 -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.071 07:57:08 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.071 07:57:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:57.071 07:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.071 07:57:08 -- common/autotest_common.sh@10 -- # set +x 00:05:57.071 ************************************ 00:05:57.071 START TEST event_perf 00:05:57.071 ************************************ 00:05:57.071 07:57:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.071 Running I/O for 1 seconds...[2024-12-07 07:57:08.300471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.071 [2024-12-07 07:57:08.300579] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68645 ] 00:05:57.329 [2024-12-07 07:57:08.437999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.329 [2024-12-07 07:57:08.514289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.329 [2024-12-07 07:57:08.514405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.330 [2024-12-07 07:57:08.514542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.330 [2024-12-07 07:57:08.514545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.706 Running I/O for 1 seconds... 00:05:58.706 lcore 0: 208931 00:05:58.706 lcore 1: 208930 00:05:58.706 lcore 2: 208930 00:05:58.706 lcore 3: 208929 00:05:58.706 done. 00:05:58.706 00:05:58.706 real 0m1.294s 00:05:58.706 user 0m4.117s 00:05:58.706 sys 0m0.060s 00:05:58.706 07:57:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.706 ************************************ 00:05:58.706 07:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.706 END TEST event_perf 00:05:58.706 ************************************ 00:05:58.706 07:57:09 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:58.706 07:57:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:58.706 07:57:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.706 07:57:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.706 ************************************ 00:05:58.706 START TEST event_reactor 00:05:58.706 ************************************ 00:05:58.706 07:57:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:58.706 [2024-12-07 07:57:09.647058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:58.706 [2024-12-07 07:57:09.647163] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68689 ] 00:05:58.706 [2024-12-07 07:57:09.778176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.706 [2024-12-07 07:57:09.835923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.640 test_start 00:05:59.640 oneshot 00:05:59.640 tick 100 00:05:59.640 tick 100 00:05:59.640 tick 250 00:05:59.640 tick 100 00:05:59.640 tick 100 00:05:59.640 tick 100 00:05:59.640 tick 250 00:05:59.640 tick 500 00:05:59.640 tick 100 00:05:59.640 tick 100 00:05:59.640 tick 250 00:05:59.640 tick 100 00:05:59.640 tick 100 00:05:59.640 test_end 00:05:59.640 00:05:59.640 real 0m1.260s 00:05:59.640 user 0m1.108s 00:05:59.640 sys 0m0.046s 00:05:59.640 07:57:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.640 ************************************ 00:05:59.640 END TEST event_reactor 00:05:59.640 ************************************ 00:05:59.640 07:57:10 -- common/autotest_common.sh@10 -- # set +x 00:05:59.899 07:57:10 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.899 07:57:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:59.899 07:57:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.899 07:57:10 -- common/autotest_common.sh@10 -- # set +x 00:05:59.899 ************************************ 00:05:59.899 START TEST event_reactor_perf 00:05:59.899 ************************************ 00:05:59.899 07:57:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.899 [2024-12-07 07:57:10.957764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:59.899 [2024-12-07 07:57:10.957862] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68719 ] 00:05:59.899 [2024-12-07 07:57:11.093848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.899 [2024-12-07 07:57:11.149736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.272 test_start 00:06:01.272 test_end 00:06:01.272 Performance: 464134 events per second 00:06:01.272 00:06:01.272 real 0m1.264s 00:06:01.272 user 0m1.097s 00:06:01.272 sys 0m0.061s 00:06:01.272 07:57:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.272 ************************************ 00:06:01.272 END TEST event_reactor_perf 00:06:01.272 ************************************ 00:06:01.272 07:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.272 07:57:12 -- event/event.sh@49 -- # uname -s 00:06:01.272 07:57:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:01.272 07:57:12 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:01.272 07:57:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.272 07:57:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.272 07:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.272 ************************************ 00:06:01.272 START TEST event_scheduler 00:06:01.272 ************************************ 00:06:01.272 07:57:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:01.272 * Looking for test storage... 00:06:01.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:01.272 07:57:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.272 07:57:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.272 07:57:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.272 07:57:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.272 07:57:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.272 07:57:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.272 07:57:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.272 07:57:12 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.272 07:57:12 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.272 07:57:12 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.272 07:57:12 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.272 07:57:12 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.272 07:57:12 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.272 07:57:12 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.272 07:57:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.272 07:57:12 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.272 07:57:12 -- scripts/common.sh@344 -- # : 1 00:06:01.272 07:57:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.272 07:57:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.272 07:57:12 -- scripts/common.sh@364 -- # decimal 1 00:06:01.272 07:57:12 -- scripts/common.sh@352 -- # local d=1 00:06:01.272 07:57:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.272 07:57:12 -- scripts/common.sh@354 -- # echo 1 00:06:01.272 07:57:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.272 07:57:12 -- scripts/common.sh@365 -- # decimal 2 00:06:01.272 07:57:12 -- scripts/common.sh@352 -- # local d=2 00:06:01.272 07:57:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.272 07:57:12 -- scripts/common.sh@354 -- # echo 2 00:06:01.272 07:57:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.272 07:57:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.272 07:57:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.272 07:57:12 -- scripts/common.sh@367 -- # return 0 00:06:01.272 07:57:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.272 07:57:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.272 --rc genhtml_branch_coverage=1 00:06:01.272 --rc genhtml_function_coverage=1 00:06:01.272 --rc genhtml_legend=1 00:06:01.272 --rc geninfo_all_blocks=1 00:06:01.272 --rc geninfo_unexecuted_blocks=1 00:06:01.272 00:06:01.272 ' 00:06:01.272 07:57:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.272 --rc genhtml_branch_coverage=1 00:06:01.272 --rc genhtml_function_coverage=1 00:06:01.272 --rc genhtml_legend=1 00:06:01.272 --rc geninfo_all_blocks=1 00:06:01.272 --rc geninfo_unexecuted_blocks=1 00:06:01.272 00:06:01.272 ' 00:06:01.272 07:57:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.272 --rc genhtml_branch_coverage=1 00:06:01.272 --rc genhtml_function_coverage=1 00:06:01.272 --rc genhtml_legend=1 00:06:01.272 --rc geninfo_all_blocks=1 00:06:01.272 --rc geninfo_unexecuted_blocks=1 00:06:01.272 00:06:01.272 ' 00:06:01.272 07:57:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.272 --rc genhtml_branch_coverage=1 00:06:01.272 --rc genhtml_function_coverage=1 00:06:01.272 --rc genhtml_legend=1 00:06:01.272 --rc geninfo_all_blocks=1 00:06:01.272 --rc geninfo_unexecuted_blocks=1 00:06:01.272 00:06:01.272 ' 00:06:01.272 07:57:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:01.272 07:57:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68782 00:06:01.272 07:57:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.272 07:57:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 68782 00:06:01.272 07:57:12 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:01.272 07:57:12 -- common/autotest_common.sh@829 -- # '[' -z 68782 ']' 00:06:01.272 07:57:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.272 07:57:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.272 07:57:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
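The scripts/common.sh trace above is the lcov version gate (lt 1.15 2) that the event and scheduler test scripts run before choosing coverage flags: both version strings are split on '.', '-' and ':' and compared component by component. A simplified reconstruction of that comparison, based only on the xtrace lines (the real cmp_versions also validates each component through its decimal helper and supports more operators):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # compare one numeric component at a time; a missing component counts as 0
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' ]]; return
            fi
        done
        # every component matched
        [[ $op == '=' ]]
    }

    lt 1.15 2 && echo 'lcov older than 2, use the branch/function coverage fallback flags'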
00:06:01.272 07:57:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.272 07:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.272 [2024-12-07 07:57:12.499742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:01.272 [2024-12-07 07:57:12.499848] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68782 ] 00:06:01.531 [2024-12-07 07:57:12.641953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.531 [2024-12-07 07:57:12.725701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.531 [2024-12-07 07:57:12.725828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.531 [2024-12-07 07:57:12.725929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.531 [2024-12-07 07:57:12.725938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.465 07:57:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.465 07:57:13 -- common/autotest_common.sh@862 -- # return 0 00:06:02.465 07:57:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:02.465 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.465 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.465 POWER: Env isn't set yet! 00:06:02.465 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:02.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.465 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.465 POWER: Attempting to initialise PSTAT power management... 00:06:02.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.465 POWER: Cannot set governor of lcore 0 to performance 00:06:02.465 POWER: Attempting to initialise AMD PSTATE power management... 00:06:02.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.465 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.465 POWER: Attempting to initialise CPPC power management... 00:06:02.465 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.465 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.465 POWER: Attempting to initialise VM power management... 
00:06:02.465 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:02.465 POWER: Unable to set Power Management Environment for lcore 0 00:06:02.465 [2024-12-07 07:57:13.511629] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:02.465 [2024-12-07 07:57:13.511673] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:02.465 [2024-12-07 07:57:13.511697] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:02.465 [2024-12-07 07:57:13.511720] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:02.465 [2024-12-07 07:57:13.511729] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:02.465 [2024-12-07 07:57:13.511736] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:02.465 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.465 07:57:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:02.465 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.465 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.465 [2024-12-07 07:57:13.604912] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:02.465 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.465 07:57:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:02.465 07:57:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.465 07:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.465 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.465 ************************************ 00:06:02.465 START TEST scheduler_create_thread 00:06:02.465 ************************************ 00:06:02.465 07:57:13 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:02.465 07:57:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:02.465 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.465 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.465 2 00:06:02.465 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.465 07:57:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:02.465 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.465 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.465 3 00:06:02.465 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 4 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 5 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 6 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 7 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 8 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 9 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 10 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.466 07:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.466 07:57:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.466 07:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.466 07:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:04.366 07:57:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.366 07:57:15 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:04.366 07:57:15 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:04.366 07:57:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.366 07:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:05.301 ************************************ 00:06:05.301 END TEST scheduler_create_thread 00:06:05.301 ************************************ 00:06:05.301 07:57:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.301 00:06:05.301 real 0m2.610s 00:06:05.301 user 0m0.016s 00:06:05.301 sys 0m0.007s 00:06:05.301 07:57:16 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.301 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:05.301 07:57:16 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:05.301 07:57:16 -- scheduler/scheduler.sh@46 -- # killprocess 68782 00:06:05.301 07:57:16 -- common/autotest_common.sh@936 -- # '[' -z 68782 ']' 00:06:05.301 07:57:16 -- common/autotest_common.sh@940 -- # kill -0 68782 00:06:05.301 07:57:16 -- common/autotest_common.sh@941 -- # uname 00:06:05.301 07:57:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.301 07:57:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68782 00:06:05.301 killing process with pid 68782 00:06:05.301 07:57:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:05.301 07:57:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:05.301 07:57:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68782' 00:06:05.301 07:57:16 -- common/autotest_common.sh@955 -- # kill 68782 00:06:05.301 07:57:16 -- common/autotest_common.sh@960 -- # wait 68782 00:06:05.559 [2024-12-07 07:57:16.708907] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:05.819 00:06:05.819 real 0m4.718s 00:06:05.819 user 0m8.976s 00:06:05.819 sys 0m0.385s 00:06:05.819 07:57:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.819 ************************************ 00:06:05.819 END TEST event_scheduler 00:06:05.819 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:05.819 ************************************ 00:06:05.819 07:57:17 -- event/event.sh@51 -- # modprobe -n nbd 00:06:05.819 07:57:17 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:05.819 07:57:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.819 07:57:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.819 07:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.819 ************************************ 00:06:05.819 START TEST app_repeat 00:06:05.819 ************************************ 00:06:05.819 07:57:17 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:05.819 07:57:17 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.819 07:57:17 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.819 07:57:17 -- event/event.sh@13 -- # local nbd_list 00:06:05.819 07:57:17 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.819 07:57:17 -- event/event.sh@14 -- # local bdev_list 00:06:05.819 07:57:17 -- event/event.sh@15 -- # local repeat_times=4 00:06:05.819 07:57:17 -- event/event.sh@17 -- # modprobe nbd 00:06:05.819 07:57:17 -- event/event.sh@19 -- # repeat_pid=68906 00:06:05.819 07:57:17 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.819 07:57:17 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:05.819 Process app_repeat pid: 68906 00:06:05.819 07:57:17 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68906' 00:06:05.819 07:57:17 -- event/event.sh@23 -- # for i in {0..2} 00:06:05.819 07:57:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:05.819 spdk_app_start Round 0 00:06:05.819 07:57:17 -- event/event.sh@25 -- # waitforlisten 68906 /var/tmp/spdk-nbd.sock 00:06:05.819 07:57:17 -- common/autotest_common.sh@829 -- # '[' -z 68906 ']' 00:06:05.819 07:57:17 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.819 07:57:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.819 07:57:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.819 07:57:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.819 07:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:05.819 [2024-12-07 07:57:17.064769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.819 [2024-12-07 07:57:17.064867] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68906 ] 00:06:06.078 [2024-12-07 07:57:17.200534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.078 [2024-12-07 07:57:17.266910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.078 [2024-12-07 07:57:17.266918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.014 07:57:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.014 07:57:18 -- common/autotest_common.sh@862 -- # return 0 00:06:07.014 07:57:18 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.274 Malloc0 00:06:07.274 07:57:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:07.533 Malloc1 00:06:07.533 07:57:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.533 07:57:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.533 07:57:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.533 07:57:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:07.533 07:57:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.533 07:57:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:07.533 07:57:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@12 -- # local i 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.534 07:57:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.793 /dev/nbd0 00:06:07.793 07:57:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.793 07:57:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.793 07:57:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:07.793 07:57:18 -- common/autotest_common.sh@867 -- # local i 00:06:07.793 07:57:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.793 07:57:18 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.793 07:57:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:07.793 07:57:18 -- common/autotest_common.sh@871 -- # break 00:06:07.793 07:57:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.793 07:57:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.793 07:57:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.793 1+0 records in 00:06:07.793 1+0 records out 00:06:07.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405774 s, 10.1 MB/s 00:06:07.793 07:57:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.793 07:57:18 -- common/autotest_common.sh@884 -- # size=4096 00:06:07.793 07:57:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.793 07:57:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.793 07:57:18 -- common/autotest_common.sh@887 -- # return 0 00:06:07.793 07:57:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.793 07:57:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.793 07:57:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.052 /dev/nbd1 00:06:08.052 07:57:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.052 07:57:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.052 07:57:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:08.052 07:57:19 -- common/autotest_common.sh@867 -- # local i 00:06:08.052 07:57:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:08.052 07:57:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:08.052 07:57:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:08.052 07:57:19 -- common/autotest_common.sh@871 -- # break 00:06:08.052 07:57:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:08.052 07:57:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:08.052 07:57:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.052 1+0 records in 00:06:08.052 1+0 records out 00:06:08.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354979 s, 11.5 MB/s 00:06:08.052 07:57:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.052 07:57:19 -- common/autotest_common.sh@884 -- # size=4096 00:06:08.052 07:57:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.052 07:57:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:08.052 07:57:19 -- common/autotest_common.sh@887 -- # return 0 00:06:08.052 07:57:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.052 07:57:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.052 07:57:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.052 07:57:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.052 07:57:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:08.311 { 00:06:08.311 "bdev_name": "Malloc0", 00:06:08.311 "nbd_device": "/dev/nbd0" 00:06:08.311 }, 00:06:08.311 { 00:06:08.311 "bdev_name": "Malloc1", 
00:06:08.311 "nbd_device": "/dev/nbd1" 00:06:08.311 } 00:06:08.311 ]' 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:08.311 { 00:06:08.311 "bdev_name": "Malloc0", 00:06:08.311 "nbd_device": "/dev/nbd0" 00:06:08.311 }, 00:06:08.311 { 00:06:08.311 "bdev_name": "Malloc1", 00:06:08.311 "nbd_device": "/dev/nbd1" 00:06:08.311 } 00:06:08.311 ]' 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:08.311 /dev/nbd1' 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:08.311 /dev/nbd1' 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@65 -- # count=2 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@95 -- # count=2 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:08.311 256+0 records in 00:06:08.311 256+0 records out 00:06:08.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0089321 s, 117 MB/s 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.311 07:57:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:08.570 256+0 records in 00:06:08.570 256+0 records out 00:06:08.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023975 s, 43.7 MB/s 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.570 256+0 records in 00:06:08.570 256+0 records out 00:06:08.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300919 s, 34.8 MB/s 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@51 -- # local i 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.570 07:57:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@41 -- # break 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.830 07:57:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@41 -- # break 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.089 07:57:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@65 -- # true 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@104 -- # count=0 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:09.347 07:57:20 -- bdev/nbd_common.sh@109 -- # return 0 00:06:09.347 07:57:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.605 07:57:20 -- event/event.sh@35 -- # sleep 3 00:06:09.864 [2024-12-07 07:57:20.939153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.864 [2024-12-07 07:57:20.988672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.864 [2024-12-07 
07:57:20.988684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.864 [2024-12-07 07:57:21.042203] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.864 [2024-12-07 07:57:21.042268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.150 07:57:23 -- event/event.sh@23 -- # for i in {0..2} 00:06:13.150 spdk_app_start Round 1 00:06:13.150 07:57:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.150 07:57:23 -- event/event.sh@25 -- # waitforlisten 68906 /var/tmp/spdk-nbd.sock 00:06:13.150 07:57:23 -- common/autotest_common.sh@829 -- # '[' -z 68906 ']' 00:06:13.150 07:57:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.150 07:57:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.150 07:57:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.150 07:57:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.150 07:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:13.150 07:57:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.150 07:57:24 -- common/autotest_common.sh@862 -- # return 0 00:06:13.150 07:57:24 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.151 Malloc0 00:06:13.151 07:57:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:13.409 Malloc1 00:06:13.409 07:57:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.409 07:57:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.409 07:57:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.409 07:57:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:13.409 07:57:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.409 07:57:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:13.409 07:57:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@12 -- # local i 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.410 07:57:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:13.668 /dev/nbd0 00:06:13.668 07:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:13.668 07:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:13.668 07:57:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:13.668 07:57:24 -- common/autotest_common.sh@867 -- # local i 00:06:13.668 07:57:24 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:13.668 07:57:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.668 07:57:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:13.668 07:57:24 -- common/autotest_common.sh@871 -- # break 00:06:13.668 07:57:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.668 07:57:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.669 07:57:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.669 1+0 records in 00:06:13.669 1+0 records out 00:06:13.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315838 s, 13.0 MB/s 00:06:13.669 07:57:24 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.669 07:57:24 -- common/autotest_common.sh@884 -- # size=4096 00:06:13.669 07:57:24 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.669 07:57:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.669 07:57:24 -- common/autotest_common.sh@887 -- # return 0 00:06:13.669 07:57:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.669 07:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.669 07:57:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.928 /dev/nbd1 00:06:13.928 07:57:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.928 07:57:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.928 07:57:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:13.928 07:57:25 -- common/autotest_common.sh@867 -- # local i 00:06:13.928 07:57:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.928 07:57:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.928 07:57:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:13.928 07:57:25 -- common/autotest_common.sh@871 -- # break 00:06:13.928 07:57:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.928 07:57:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.928 07:57:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.928 1+0 records in 00:06:13.928 1+0 records out 00:06:13.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274158 s, 14.9 MB/s 00:06:13.928 07:57:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.928 07:57:25 -- common/autotest_common.sh@884 -- # size=4096 00:06:13.928 07:57:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.928 07:57:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.928 07:57:25 -- common/autotest_common.sh@887 -- # return 0 00:06:13.928 07:57:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.928 07:57:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.928 07:57:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.928 07:57:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.928 07:57:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.186 07:57:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.186 { 00:06:14.186 "bdev_name": "Malloc0", 00:06:14.186 "nbd_device": "/dev/nbd0" 00:06:14.186 }, 00:06:14.186 { 00:06:14.186 
"bdev_name": "Malloc1", 00:06:14.186 "nbd_device": "/dev/nbd1" 00:06:14.186 } 00:06:14.186 ]' 00:06:14.186 07:57:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.186 { 00:06:14.186 "bdev_name": "Malloc0", 00:06:14.186 "nbd_device": "/dev/nbd0" 00:06:14.186 }, 00:06:14.186 { 00:06:14.186 "bdev_name": "Malloc1", 00:06:14.186 "nbd_device": "/dev/nbd1" 00:06:14.186 } 00:06:14.186 ]' 00:06:14.186 07:57:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.444 /dev/nbd1' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.444 /dev/nbd1' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.444 256+0 records in 00:06:14.444 256+0 records out 00:06:14.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666186 s, 157 MB/s 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.444 256+0 records in 00:06:14.444 256+0 records out 00:06:14.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024063 s, 43.6 MB/s 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.444 256+0 records in 00:06:14.444 256+0 records out 00:06:14.444 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306191 s, 34.2 MB/s 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.444 07:57:25 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.444 07:57:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.702 07:57:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@41 -- # break 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.703 07:57:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@41 -- # break 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.961 07:57:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@65 -- # true 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.220 07:57:26 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.220 07:57:26 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.478 07:57:26 -- event/event.sh@35 -- # sleep 3 00:06:15.737 [2024-12-07 07:57:26.859684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.737 [2024-12-07 07:57:26.902555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
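Rounds 0 and 1 of app_repeat above exercise the same write/verify cycle against the two exported nbd devices: nbd_common.sh fills a scratch file from /dev/urandom, dd's it onto every /dev/nbdX, then cmp's each device back against the file. Condensed from the dd and cmp commands in the trace (the helper itself takes the device list and a write/verify operation as arguments and runs the two phases as separate calls):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    # write phase: 1 MiB of random data copied onto each device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first MiB of each device must read back identical to the scratch file
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"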
00:06:15.738 [2024-12-07 07:57:26.902559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.738 [2024-12-07 07:57:26.954658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.738 [2024-12-07 07:57:26.954724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.029 spdk_app_start Round 2 00:06:19.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.029 07:57:29 -- event/event.sh@23 -- # for i in {0..2} 00:06:19.029 07:57:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:19.029 07:57:29 -- event/event.sh@25 -- # waitforlisten 68906 /var/tmp/spdk-nbd.sock 00:06:19.029 07:57:29 -- common/autotest_common.sh@829 -- # '[' -z 68906 ']' 00:06:19.029 07:57:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.029 07:57:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.029 07:57:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.029 07:57:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.029 07:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:19.029 07:57:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.029 07:57:29 -- common/autotest_common.sh@862 -- # return 0 00:06:19.029 07:57:29 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.029 Malloc0 00:06:19.029 07:57:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.287 Malloc1 00:06:19.287 07:57:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@12 -- # local i 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.287 07:57:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.544 /dev/nbd0 00:06:19.544 07:57:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.544 07:57:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.544 07:57:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.544 07:57:30 -- common/autotest_common.sh@867 -- # local i 00:06:19.544 07:57:30 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.544 07:57:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.544 07:57:30 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.544 07:57:30 -- common/autotest_common.sh@871 -- # break 00:06:19.544 07:57:30 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.544 07:57:30 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.544 07:57:30 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.544 1+0 records in 00:06:19.544 1+0 records out 00:06:19.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322265 s, 12.7 MB/s 00:06:19.544 07:57:30 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.544 07:57:30 -- common/autotest_common.sh@884 -- # size=4096 00:06:19.544 07:57:30 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.544 07:57:30 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.544 07:57:30 -- common/autotest_common.sh@887 -- # return 0 00:06:19.544 07:57:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.544 07:57:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.544 07:57:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.802 /dev/nbd1 00:06:19.802 07:57:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.802 07:57:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.802 07:57:30 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:19.802 07:57:30 -- common/autotest_common.sh@867 -- # local i 00:06:19.802 07:57:30 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.802 07:57:30 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.802 07:57:31 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:19.802 07:57:31 -- common/autotest_common.sh@871 -- # break 00:06:19.802 07:57:31 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.802 07:57:31 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.802 07:57:31 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.802 1+0 records in 00:06:19.802 1+0 records out 00:06:19.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002642 s, 15.5 MB/s 00:06:19.802 07:57:31 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.802 07:57:31 -- common/autotest_common.sh@884 -- # size=4096 00:06:19.802 07:57:31 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.802 07:57:31 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.802 07:57:31 -- common/autotest_common.sh@887 -- # return 0 00:06:19.802 07:57:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.802 07:57:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.802 07:57:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.802 07:57:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.802 07:57:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.059 07:57:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.059 { 00:06:20.059 "bdev_name": "Malloc0", 00:06:20.059 "nbd_device": "/dev/nbd0" 
00:06:20.059 }, 00:06:20.059 { 00:06:20.059 "bdev_name": "Malloc1", 00:06:20.059 "nbd_device": "/dev/nbd1" 00:06:20.059 } 00:06:20.059 ]' 00:06:20.059 07:57:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.059 { 00:06:20.059 "bdev_name": "Malloc0", 00:06:20.059 "nbd_device": "/dev/nbd0" 00:06:20.059 }, 00:06:20.059 { 00:06:20.059 "bdev_name": "Malloc1", 00:06:20.059 "nbd_device": "/dev/nbd1" 00:06:20.059 } 00:06:20.059 ]' 00:06:20.059 07:57:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.318 /dev/nbd1' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.318 /dev/nbd1' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.318 256+0 records in 00:06:20.318 256+0 records out 00:06:20.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00871329 s, 120 MB/s 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.318 256+0 records in 00:06:20.318 256+0 records out 00:06:20.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023273 s, 45.1 MB/s 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.318 256+0 records in 00:06:20.318 256+0 records out 00:06:20.318 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256273 s, 40.9 MB/s 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@51 -- # local i 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.318 07:57:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@41 -- # break 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.577 07:57:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@41 -- # break 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.835 07:57:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@65 -- # true 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.093 07:57:32 -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.093 07:57:32 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.351 07:57:32 -- event/event.sh@35 -- # sleep 3 00:06:21.609 [2024-12-07 07:57:32.675879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.609 [2024-12-07 07:57:32.725735] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:21.609 [2024-12-07 07:57:32.725747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.609 [2024-12-07 07:57:32.778932] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.609 [2024-12-07 07:57:32.778987] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.893 07:57:35 -- event/event.sh@38 -- # waitforlisten 68906 /var/tmp/spdk-nbd.sock 00:06:24.893 07:57:35 -- common/autotest_common.sh@829 -- # '[' -z 68906 ']' 00:06:24.893 07:57:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.893 07:57:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.893 07:57:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.893 07:57:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.893 07:57:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.893 07:57:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.893 07:57:35 -- common/autotest_common.sh@862 -- # return 0 00:06:24.893 07:57:35 -- event/event.sh@39 -- # killprocess 68906 00:06:24.893 07:57:35 -- common/autotest_common.sh@936 -- # '[' -z 68906 ']' 00:06:24.893 07:57:35 -- common/autotest_common.sh@940 -- # kill -0 68906 00:06:24.893 07:57:35 -- common/autotest_common.sh@941 -- # uname 00:06:24.893 07:57:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.893 07:57:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68906 00:06:24.893 killing process with pid 68906 00:06:24.893 07:57:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.893 07:57:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.893 07:57:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68906' 00:06:24.893 07:57:35 -- common/autotest_common.sh@955 -- # kill 68906 00:06:24.893 07:57:35 -- common/autotest_common.sh@960 -- # wait 68906 00:06:24.893 spdk_app_start is called in Round 0. 00:06:24.893 Shutdown signal received, stop current app iteration 00:06:24.893 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:24.893 spdk_app_start is called in Round 1. 00:06:24.893 Shutdown signal received, stop current app iteration 00:06:24.893 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:24.893 spdk_app_start is called in Round 2. 00:06:24.893 Shutdown signal received, stop current app iteration 00:06:24.893 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:24.893 spdk_app_start is called in Round 3. 
00:06:24.893 Shutdown signal received, stop current app iteration 00:06:24.893 ************************************ 00:06:24.893 END TEST app_repeat 00:06:24.893 ************************************ 00:06:24.893 07:57:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.893 07:57:35 -- event/event.sh@42 -- # return 0 00:06:24.893 00:06:24.893 real 0m18.949s 00:06:24.893 user 0m42.839s 00:06:24.893 sys 0m2.764s 00:06:24.893 07:57:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.893 07:57:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.893 07:57:36 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.893 07:57:36 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.893 07:57:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.893 07:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.893 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:24.893 ************************************ 00:06:24.893 START TEST cpu_locks 00:06:24.893 ************************************ 00:06:24.893 07:57:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.893 * Looking for test storage... 00:06:24.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:24.893 07:57:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:24.893 07:57:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:24.893 07:57:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:25.151 07:57:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:25.151 07:57:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:25.151 07:57:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:25.151 07:57:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:25.151 07:57:36 -- scripts/common.sh@335 -- # IFS=.-: 00:06:25.151 07:57:36 -- scripts/common.sh@335 -- # read -ra ver1 00:06:25.151 07:57:36 -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.151 07:57:36 -- scripts/common.sh@336 -- # read -ra ver2 00:06:25.151 07:57:36 -- scripts/common.sh@337 -- # local 'op=<' 00:06:25.151 07:57:36 -- scripts/common.sh@339 -- # ver1_l=2 00:06:25.151 07:57:36 -- scripts/common.sh@340 -- # ver2_l=1 00:06:25.151 07:57:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:25.151 07:57:36 -- scripts/common.sh@343 -- # case "$op" in 00:06:25.151 07:57:36 -- scripts/common.sh@344 -- # : 1 00:06:25.151 07:57:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:25.151 07:57:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:25.151 07:57:36 -- scripts/common.sh@364 -- # decimal 1 00:06:25.151 07:57:36 -- scripts/common.sh@352 -- # local d=1 00:06:25.151 07:57:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.151 07:57:36 -- scripts/common.sh@354 -- # echo 1 00:06:25.152 07:57:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:25.152 07:57:36 -- scripts/common.sh@365 -- # decimal 2 00:06:25.152 07:57:36 -- scripts/common.sh@352 -- # local d=2 00:06:25.152 07:57:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.152 07:57:36 -- scripts/common.sh@354 -- # echo 2 00:06:25.152 07:57:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:25.152 07:57:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:25.152 07:57:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:25.152 07:57:36 -- scripts/common.sh@367 -- # return 0 00:06:25.152 07:57:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.152 07:57:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.152 --rc genhtml_branch_coverage=1 00:06:25.152 --rc genhtml_function_coverage=1 00:06:25.152 --rc genhtml_legend=1 00:06:25.152 --rc geninfo_all_blocks=1 00:06:25.152 --rc geninfo_unexecuted_blocks=1 00:06:25.152 00:06:25.152 ' 00:06:25.152 07:57:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.152 --rc genhtml_branch_coverage=1 00:06:25.152 --rc genhtml_function_coverage=1 00:06:25.152 --rc genhtml_legend=1 00:06:25.152 --rc geninfo_all_blocks=1 00:06:25.152 --rc geninfo_unexecuted_blocks=1 00:06:25.152 00:06:25.152 ' 00:06:25.152 07:57:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.152 --rc genhtml_branch_coverage=1 00:06:25.152 --rc genhtml_function_coverage=1 00:06:25.152 --rc genhtml_legend=1 00:06:25.152 --rc geninfo_all_blocks=1 00:06:25.152 --rc geninfo_unexecuted_blocks=1 00:06:25.152 00:06:25.152 ' 00:06:25.152 07:57:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:25.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.152 --rc genhtml_branch_coverage=1 00:06:25.152 --rc genhtml_function_coverage=1 00:06:25.152 --rc genhtml_legend=1 00:06:25.152 --rc geninfo_all_blocks=1 00:06:25.152 --rc geninfo_unexecuted_blocks=1 00:06:25.152 00:06:25.152 ' 00:06:25.152 07:57:36 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.152 07:57:36 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.152 07:57:36 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.152 07:57:36 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.152 07:57:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.152 07:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.152 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:25.152 ************************************ 00:06:25.152 START TEST default_locks 00:06:25.152 ************************************ 00:06:25.152 07:57:36 -- common/autotest_common.sh@1114 -- # default_locks 00:06:25.152 07:57:36 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69539 00:06:25.152 07:57:36 -- event/cpu_locks.sh@47 -- # waitforlisten 69539 00:06:25.152 07:57:36 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:25.152 07:57:36 -- common/autotest_common.sh@829 -- # '[' -z 69539 ']' 00:06:25.152 07:57:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.152 07:57:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.152 07:57:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.152 07:57:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.152 07:57:36 -- common/autotest_common.sh@10 -- # set +x 00:06:25.152 [2024-12-07 07:57:36.277703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.152 [2024-12-07 07:57:36.278014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69539 ] 00:06:25.152 [2024-12-07 07:57:36.416134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.410 [2024-12-07 07:57:36.475388] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.410 [2024-12-07 07:57:36.475857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.346 07:57:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.346 07:57:37 -- common/autotest_common.sh@862 -- # return 0 00:06:26.346 07:57:37 -- event/cpu_locks.sh@49 -- # locks_exist 69539 00:06:26.346 07:57:37 -- event/cpu_locks.sh@22 -- # lslocks -p 69539 00:06:26.346 07:57:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.605 07:57:37 -- event/cpu_locks.sh@50 -- # killprocess 69539 00:06:26.605 07:57:37 -- common/autotest_common.sh@936 -- # '[' -z 69539 ']' 00:06:26.605 07:57:37 -- common/autotest_common.sh@940 -- # kill -0 69539 00:06:26.605 07:57:37 -- common/autotest_common.sh@941 -- # uname 00:06:26.605 07:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.605 07:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69539 00:06:26.605 killing process with pid 69539 00:06:26.605 07:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.605 07:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.605 07:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69539' 00:06:26.605 07:57:37 -- common/autotest_common.sh@955 -- # kill 69539 00:06:26.605 07:57:37 -- common/autotest_common.sh@960 -- # wait 69539 00:06:27.172 07:57:38 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69539 00:06:27.172 07:57:38 -- common/autotest_common.sh@650 -- # local es=0 00:06:27.172 07:57:38 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69539 00:06:27.172 07:57:38 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.172 07:57:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.172 07:57:38 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.172 07:57:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.172 07:57:38 -- common/autotest_common.sh@653 -- # waitforlisten 69539 00:06:27.172 07:57:38 -- common/autotest_common.sh@829 -- # '[' -z 69539 ']' 00:06:27.172 07:57:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.172 07:57:38 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.172 07:57:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.172 07:57:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.172 ERROR: process (pid: 69539) is no longer running 00:06:27.173 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.173 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69539) - No such process 00:06:27.173 07:57:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.173 07:57:38 -- common/autotest_common.sh@862 -- # return 1 00:06:27.173 07:57:38 -- common/autotest_common.sh@653 -- # es=1 00:06:27.173 07:57:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.173 07:57:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.173 07:57:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.173 07:57:38 -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.173 07:57:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.173 07:57:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.173 07:57:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.173 00:06:27.173 real 0m1.942s 00:06:27.173 user 0m2.133s 00:06:27.173 sys 0m0.581s 00:06:27.173 07:57:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.173 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.173 ************************************ 00:06:27.173 END TEST default_locks 00:06:27.173 ************************************ 00:06:27.173 07:57:38 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.173 07:57:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.173 07:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.173 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.173 ************************************ 00:06:27.173 START TEST default_locks_via_rpc 00:06:27.173 ************************************ 00:06:27.173 07:57:38 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:27.173 07:57:38 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69603 00:06:27.173 07:57:38 -- event/cpu_locks.sh@63 -- # waitforlisten 69603 00:06:27.173 07:57:38 -- common/autotest_common.sh@829 -- # '[' -z 69603 ']' 00:06:27.173 07:57:38 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.173 07:57:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.173 07:57:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.173 07:57:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.173 07:57:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.173 07:57:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.173 [2024-12-07 07:57:38.269143] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
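The default_locks pass that wraps up above hinges on two small shell helpers visible in the traces: locks_exist (lslocks piped through grep for the spdk_cpu_lock file) and killprocess (a kill -0 liveness check, then kill plus wait). A minimal sketch, using only commands that appear in the log; the real versions in test/event/cpu_locks.sh and test/common/autotest_common.sh carry extra argument and process-name checks:

    locks_exist() {
      local pid=$1
      # spdk_tgt holds one /var/tmp/spdk_cpu_lock_* file per claimed core;
      # lslocks lists the advisory locks held by that pid.
      lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1   # is the target still alive?
      kill "$pid"
      wait "$pid" || true          # reap it; ignore the signal exit status
    }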
00:06:27.173 [2024-12-07 07:57:38.269270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69603 ] 00:06:27.173 [2024-12-07 07:57:38.409194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.432 [2024-12-07 07:57:38.469538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.432 [2024-12-07 07:57:38.469732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.368 07:57:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.368 07:57:39 -- common/autotest_common.sh@862 -- # return 0 00:06:28.368 07:57:39 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:28.368 07:57:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.368 07:57:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.368 07:57:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.368 07:57:39 -- event/cpu_locks.sh@67 -- # no_locks 00:06:28.368 07:57:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.369 07:57:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.369 07:57:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.369 07:57:39 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.369 07:57:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.369 07:57:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 07:57:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.369 07:57:39 -- event/cpu_locks.sh@71 -- # locks_exist 69603 00:06:28.369 07:57:39 -- event/cpu_locks.sh@22 -- # lslocks -p 69603 00:06:28.369 07:57:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.627 07:57:39 -- event/cpu_locks.sh@73 -- # killprocess 69603 00:06:28.627 07:57:39 -- common/autotest_common.sh@936 -- # '[' -z 69603 ']' 00:06:28.627 07:57:39 -- common/autotest_common.sh@940 -- # kill -0 69603 00:06:28.627 07:57:39 -- common/autotest_common.sh@941 -- # uname 00:06:28.627 07:57:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.628 07:57:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69603 00:06:28.628 killing process with pid 69603 00:06:28.628 07:57:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.628 07:57:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.628 07:57:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69603' 00:06:28.628 07:57:39 -- common/autotest_common.sh@955 -- # kill 69603 00:06:28.628 07:57:39 -- common/autotest_common.sh@960 -- # wait 69603 00:06:28.887 00:06:28.887 real 0m1.910s 00:06:28.887 user 0m2.120s 00:06:28.887 sys 0m0.577s 00:06:28.887 ************************************ 00:06:28.887 END TEST default_locks_via_rpc 00:06:28.887 ************************************ 00:06:28.887 07:57:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.887 07:57:40 -- common/autotest_common.sh@10 -- # set +x 00:06:28.887 07:57:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.887 07:57:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.887 07:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.887 07:57:40 -- common/autotest_common.sh@10 -- # set +x 00:06:29.147 
************************************ 00:06:29.147 START TEST non_locking_app_on_locked_coremask 00:06:29.147 ************************************ 00:06:29.147 07:57:40 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:29.147 07:57:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69672 00:06:29.147 07:57:40 -- event/cpu_locks.sh@81 -- # waitforlisten 69672 /var/tmp/spdk.sock 00:06:29.147 07:57:40 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.147 07:57:40 -- common/autotest_common.sh@829 -- # '[' -z 69672 ']' 00:06:29.147 07:57:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.147 07:57:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.147 07:57:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.147 07:57:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.147 07:57:40 -- common/autotest_common.sh@10 -- # set +x 00:06:29.147 [2024-12-07 07:57:40.239763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.147 [2024-12-07 07:57:40.240071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69672 ] 00:06:29.147 [2024-12-07 07:57:40.376161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.407 [2024-12-07 07:57:40.449002] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.407 [2024-12-07 07:57:40.449176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.975 07:57:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.975 07:57:41 -- common/autotest_common.sh@862 -- # return 0 00:06:29.975 07:57:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69700 00:06:29.975 07:57:41 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.975 07:57:41 -- event/cpu_locks.sh@85 -- # waitforlisten 69700 /var/tmp/spdk2.sock 00:06:29.975 07:57:41 -- common/autotest_common.sh@829 -- # '[' -z 69700 ']' 00:06:29.975 07:57:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.975 07:57:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.975 07:57:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.975 07:57:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.975 07:57:41 -- common/autotest_common.sh@10 -- # set +x 00:06:30.237 [2024-12-07 07:57:41.281969] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.237 [2024-12-07 07:57:41.282347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69700 ] 00:06:30.237 [2024-12-07 07:57:41.426905] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.237 [2024-12-07 07:57:41.426988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.497 [2024-12-07 07:57:41.557442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.497 [2024-12-07 07:57:41.557699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.064 07:57:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.064 07:57:42 -- common/autotest_common.sh@862 -- # return 0 00:06:31.064 07:57:42 -- event/cpu_locks.sh@87 -- # locks_exist 69672 00:06:31.064 07:57:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.064 07:57:42 -- event/cpu_locks.sh@22 -- # lslocks -p 69672 00:06:31.632 07:57:42 -- event/cpu_locks.sh@89 -- # killprocess 69672 00:06:31.632 07:57:42 -- common/autotest_common.sh@936 -- # '[' -z 69672 ']' 00:06:31.632 07:57:42 -- common/autotest_common.sh@940 -- # kill -0 69672 00:06:31.632 07:57:42 -- common/autotest_common.sh@941 -- # uname 00:06:31.632 07:57:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:31.632 07:57:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69672 00:06:31.632 killing process with pid 69672 00:06:31.632 07:57:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:31.632 07:57:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:31.632 07:57:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69672' 00:06:31.632 07:57:42 -- common/autotest_common.sh@955 -- # kill 69672 00:06:31.632 07:57:42 -- common/autotest_common.sh@960 -- # wait 69672 00:06:32.200 07:57:43 -- event/cpu_locks.sh@90 -- # killprocess 69700 00:06:32.200 07:57:43 -- common/autotest_common.sh@936 -- # '[' -z 69700 ']' 00:06:32.200 07:57:43 -- common/autotest_common.sh@940 -- # kill -0 69700 00:06:32.200 07:57:43 -- common/autotest_common.sh@941 -- # uname 00:06:32.200 07:57:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.200 07:57:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69700 00:06:32.459 killing process with pid 69700 00:06:32.459 07:57:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.459 07:57:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.459 07:57:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69700' 00:06:32.459 07:57:43 -- common/autotest_common.sh@955 -- # kill 69700 00:06:32.459 07:57:43 -- common/autotest_common.sh@960 -- # wait 69700 00:06:32.717 00:06:32.717 real 0m3.682s 00:06:32.717 user 0m4.047s 00:06:32.717 sys 0m1.024s 00:06:32.717 07:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.717 ************************************ 00:06:32.717 END TEST non_locking_app_on_locked_coremask 00:06:32.717 ************************************ 00:06:32.717 07:57:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.717 07:57:43 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.717 07:57:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:32.717 07:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.717 07:57:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.717 ************************************ 00:06:32.717 START TEST locking_app_on_unlocked_coremask 00:06:32.717 ************************************ 00:06:32.717 07:57:43 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:32.717 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.717 07:57:43 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=69772 00:06:32.717 07:57:43 -- event/cpu_locks.sh@99 -- # waitforlisten 69772 /var/tmp/spdk.sock 00:06:32.717 07:57:43 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.717 07:57:43 -- common/autotest_common.sh@829 -- # '[' -z 69772 ']' 00:06:32.717 07:57:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.717 07:57:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.717 07:57:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.717 07:57:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.717 07:57:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.717 [2024-12-07 07:57:43.955347] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.717 [2024-12-07 07:57:43.956078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:06:32.976 [2024-12-07 07:57:44.097242] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.976 [2024-12-07 07:57:44.097456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.976 [2024-12-07 07:57:44.177181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.976 [2024-12-07 07:57:44.177665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.912 07:57:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.912 07:57:44 -- common/autotest_common.sh@862 -- # return 0 00:06:33.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.912 07:57:44 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69800 00:06:33.912 07:57:44 -- event/cpu_locks.sh@103 -- # waitforlisten 69800 /var/tmp/spdk2.sock 00:06:33.912 07:57:44 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.912 07:57:44 -- common/autotest_common.sh@829 -- # '[' -z 69800 ']' 00:06:33.912 07:57:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.912 07:57:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.912 07:57:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.912 07:57:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.912 07:57:44 -- common/autotest_common.sh@10 -- # set +x 00:06:33.912 [2024-12-07 07:57:45.005976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:33.912 [2024-12-07 07:57:45.006297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69800 ] 00:06:33.912 [2024-12-07 07:57:45.147860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.171 [2024-12-07 07:57:45.274705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.171 [2024-12-07 07:57:45.274861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.738 07:57:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.738 07:57:45 -- common/autotest_common.sh@862 -- # return 0 00:06:34.738 07:57:45 -- event/cpu_locks.sh@105 -- # locks_exist 69800 00:06:34.738 07:57:45 -- event/cpu_locks.sh@22 -- # lslocks -p 69800 00:06:34.738 07:57:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.674 07:57:46 -- event/cpu_locks.sh@107 -- # killprocess 69772 00:06:35.674 07:57:46 -- common/autotest_common.sh@936 -- # '[' -z 69772 ']' 00:06:35.674 07:57:46 -- common/autotest_common.sh@940 -- # kill -0 69772 00:06:35.674 07:57:46 -- common/autotest_common.sh@941 -- # uname 00:06:35.674 07:57:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:35.674 07:57:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69772 00:06:35.674 killing process with pid 69772 00:06:35.674 07:57:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:35.674 07:57:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:35.674 07:57:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69772' 00:06:35.674 07:57:46 -- common/autotest_common.sh@955 -- # kill 69772 00:06:35.674 07:57:46 -- common/autotest_common.sh@960 -- # wait 69772 00:06:36.241 07:57:47 -- event/cpu_locks.sh@108 -- # killprocess 69800 00:06:36.241 07:57:47 -- common/autotest_common.sh@936 -- # '[' -z 69800 ']' 00:06:36.241 07:57:47 -- common/autotest_common.sh@940 -- # kill -0 69800 00:06:36.241 07:57:47 -- common/autotest_common.sh@941 -- # uname 00:06:36.241 07:57:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:36.241 07:57:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69800 00:06:36.241 killing process with pid 69800 00:06:36.241 07:57:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:36.241 07:57:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:36.241 07:57:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69800' 00:06:36.241 07:57:47 -- common/autotest_common.sh@955 -- # kill 69800 00:06:36.241 07:57:47 -- common/autotest_common.sh@960 -- # wait 69800 00:06:36.809 ************************************ 00:06:36.809 END TEST locking_app_on_unlocked_coremask 00:06:36.809 ************************************ 00:06:36.809 00:06:36.809 real 0m3.905s 00:06:36.809 user 0m4.361s 00:06:36.809 sys 0m1.120s 00:06:36.809 07:57:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.809 07:57:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.809 07:57:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:36.809 07:57:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.809 07:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.809 07:57:47 -- common/autotest_common.sh@10 -- # set +x 
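The two passes above (non_locking_app_on_locked_coremask and locking_app_on_unlocked_coremask) exercise mirror cases of the --disable-cpumask-locks flag: either the second spdk_tgt opts out of locking so it can share a core the first one already locked, or the first one opts out so the second can claim the lock itself. A rough sketch of the second arrangement, with the binary path, masks and socket taken from the log and the pid variables added here only for illustration:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $SPDK_BIN -m 0x1 --disable-cpumask-locks &  pid1=$!   # runs on core 0, takes no lock
    $SPDK_BIN -m 0x1 -r /var/tmp/spdk2.sock &   pid2=$!   # same core, still starts and claims the lock

    lslocks -p "$pid2" | grep -q spdk_cpu_lock            # the lock file now belongs to the second app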
00:06:36.809 ************************************ 00:06:36.809 START TEST locking_app_on_locked_coremask 00:06:36.809 ************************************ 00:06:36.809 07:57:47 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:36.809 07:57:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69875 00:06:36.809 07:57:47 -- event/cpu_locks.sh@116 -- # waitforlisten 69875 /var/tmp/spdk.sock 00:06:36.809 07:57:47 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.809 07:57:47 -- common/autotest_common.sh@829 -- # '[' -z 69875 ']' 00:06:36.809 07:57:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.809 07:57:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.809 07:57:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.809 07:57:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.809 07:57:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.809 [2024-12-07 07:57:47.914338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:36.809 [2024-12-07 07:57:47.914448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69875 ] 00:06:36.809 [2024-12-07 07:57:48.053478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.069 [2024-12-07 07:57:48.128694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.069 [2024-12-07 07:57:48.128841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.008 07:57:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.008 07:57:48 -- common/autotest_common.sh@862 -- # return 0 00:06:38.008 07:57:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69903 00:06:38.008 07:57:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69903 /var/tmp/spdk2.sock 00:06:38.008 07:57:48 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.008 07:57:48 -- common/autotest_common.sh@650 -- # local es=0 00:06:38.008 07:57:48 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69903 /var/tmp/spdk2.sock 00:06:38.008 07:57:48 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:38.008 07:57:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.008 07:57:48 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:38.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.008 07:57:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.008 07:57:48 -- common/autotest_common.sh@653 -- # waitforlisten 69903 /var/tmp/spdk2.sock 00:06:38.008 07:57:48 -- common/autotest_common.sh@829 -- # '[' -z 69903 ']' 00:06:38.008 07:57:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.008 07:57:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.008 07:57:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
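The NOT helper showing up in these traces simply inverts an exit status, which is how an expected failure (a second target trying to lock a core that pid 69875 already holds) gets asserted. A hedged sketch; the real helper in test/common/autotest_common.sh also validates that its argument is callable before running it:

    NOT() {
      if "$@"; then
        return 1    # the wrapped command unexpectedly succeeded
      fi
      return 0      # the wrapped command failed, which is what the test wants
    }

    # As invoked above; waitforlisten is the test helper from the log, expected
    # to fail here because core 0 is already locked by the first instance.
    NOT waitforlisten 69903 /var/tmp/spdk2.sock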
00:06:38.008 07:57:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.008 07:57:48 -- common/autotest_common.sh@10 -- # set +x 00:06:38.008 [2024-12-07 07:57:48.978537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.008 [2024-12-07 07:57:48.978631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69903 ] 00:06:38.008 [2024-12-07 07:57:49.121985] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69875 has claimed it. 00:06:38.008 [2024-12-07 07:57:49.122046] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.599 ERROR: process (pid: 69903) is no longer running 00:06:38.599 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69903) - No such process 00:06:38.599 07:57:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.599 07:57:49 -- common/autotest_common.sh@862 -- # return 1 00:06:38.599 07:57:49 -- common/autotest_common.sh@653 -- # es=1 00:06:38.599 07:57:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.599 07:57:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:38.599 07:57:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.599 07:57:49 -- event/cpu_locks.sh@122 -- # locks_exist 69875 00:06:38.599 07:57:49 -- event/cpu_locks.sh@22 -- # lslocks -p 69875 00:06:38.599 07:57:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.862 07:57:49 -- event/cpu_locks.sh@124 -- # killprocess 69875 00:06:38.862 07:57:49 -- common/autotest_common.sh@936 -- # '[' -z 69875 ']' 00:06:38.862 07:57:49 -- common/autotest_common.sh@940 -- # kill -0 69875 00:06:38.862 07:57:49 -- common/autotest_common.sh@941 -- # uname 00:06:38.862 07:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:38.862 07:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69875 00:06:38.862 killing process with pid 69875 00:06:38.862 07:57:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:38.862 07:57:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:38.862 07:57:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69875' 00:06:38.862 07:57:50 -- common/autotest_common.sh@955 -- # kill 69875 00:06:38.862 07:57:50 -- common/autotest_common.sh@960 -- # wait 69875 00:06:39.124 00:06:39.124 real 0m2.514s 00:06:39.124 user 0m2.965s 00:06:39.124 sys 0m0.573s 00:06:39.124 ************************************ 00:06:39.124 END TEST locking_app_on_locked_coremask 00:06:39.124 ************************************ 00:06:39.124 07:57:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.124 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.382 07:57:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.382 07:57:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.382 07:57:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.382 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.382 ************************************ 00:06:39.382 START TEST locking_overlapped_coremask 00:06:39.382 ************************************ 00:06:39.382 07:57:50 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:39.382 07:57:50 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69960 00:06:39.382 07:57:50 -- event/cpu_locks.sh@133 -- # waitforlisten 69960 /var/tmp/spdk.sock 00:06:39.382 07:57:50 -- common/autotest_common.sh@829 -- # '[' -z 69960 ']' 00:06:39.382 07:57:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.382 07:57:50 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.382 07:57:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.382 07:57:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.382 07:57:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.382 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.382 [2024-12-07 07:57:50.467231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.382 [2024-12-07 07:57:50.467337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69960 ] 00:06:39.382 [2024-12-07 07:57:50.601773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.643 [2024-12-07 07:57:50.672810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.643 [2024-12-07 07:57:50.673157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.643 [2024-12-07 07:57:50.673526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.643 [2024-12-07 07:57:50.673537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.210 07:57:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.210 07:57:51 -- common/autotest_common.sh@862 -- # return 0 00:06:40.210 07:57:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69992 00:06:40.210 07:57:51 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:40.211 07:57:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69992 /var/tmp/spdk2.sock 00:06:40.211 07:57:51 -- common/autotest_common.sh@650 -- # local es=0 00:06:40.211 07:57:51 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69992 /var/tmp/spdk2.sock 00:06:40.211 07:57:51 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.211 07:57:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.211 07:57:51 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.211 07:57:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.211 07:57:51 -- common/autotest_common.sh@653 -- # waitforlisten 69992 /var/tmp/spdk2.sock 00:06:40.211 07:57:51 -- common/autotest_common.sh@829 -- # '[' -z 69992 ']' 00:06:40.211 07:57:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.211 07:57:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.211 07:57:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
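The masks launched above make the conflict easy to see: -m 0x7 pins the first target to cores 0-2 and -m 0x1c asks for cores 2-4, so the two bitmasks overlap on core 2 and the second target cannot take /var/tmp/spdk_cpu_lock_002. A quick, purely illustrative check of that arithmetic:

    printf 'contested mask: 0x%x\n' $((0x7 & 0x1c))   # -> 0x4, i.e. core 2
    ls /var/tmp/spdk_cpu_lock_*                       # _000.._002, all held by pid 69960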
00:06:40.211 07:57:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.211 07:57:51 -- common/autotest_common.sh@10 -- # set +x 00:06:40.468 [2024-12-07 07:57:51.503011] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.468 [2024-12-07 07:57:51.503096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69992 ] 00:06:40.468 [2024-12-07 07:57:51.644355] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69960 has claimed it. 00:06:40.468 [2024-12-07 07:57:51.648251] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.048 ERROR: process (pid: 69992) is no longer running 00:06:41.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69992) - No such process 00:06:41.048 07:57:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.048 07:57:52 -- common/autotest_common.sh@862 -- # return 1 00:06:41.048 07:57:52 -- common/autotest_common.sh@653 -- # es=1 00:06:41.048 07:57:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.048 07:57:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:41.048 07:57:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.048 07:57:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:41.048 07:57:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.048 07:57:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.048 07:57:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.048 07:57:52 -- event/cpu_locks.sh@141 -- # killprocess 69960 00:06:41.048 07:57:52 -- common/autotest_common.sh@936 -- # '[' -z 69960 ']' 00:06:41.048 07:57:52 -- common/autotest_common.sh@940 -- # kill -0 69960 00:06:41.048 07:57:52 -- common/autotest_common.sh@941 -- # uname 00:06:41.048 07:57:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.048 07:57:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69960 00:06:41.048 07:57:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.048 07:57:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.048 07:57:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69960' 00:06:41.048 killing process with pid 69960 00:06:41.048 07:57:52 -- common/autotest_common.sh@955 -- # kill 69960 00:06:41.048 07:57:52 -- common/autotest_common.sh@960 -- # wait 69960 00:06:41.614 00:06:41.614 real 0m2.223s 00:06:41.614 user 0m6.346s 00:06:41.614 sys 0m0.443s 00:06:41.614 07:57:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.614 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.614 ************************************ 00:06:41.614 END TEST locking_overlapped_coremask 00:06:41.614 ************************************ 00:06:41.614 07:57:52 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:41.614 07:57:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.614 07:57:52 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.614 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.614 ************************************ 00:06:41.614 START TEST locking_overlapped_coremask_via_rpc 00:06:41.614 ************************************ 00:06:41.614 07:57:52 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:41.614 07:57:52 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70038 00:06:41.614 07:57:52 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:41.614 07:57:52 -- event/cpu_locks.sh@149 -- # waitforlisten 70038 /var/tmp/spdk.sock 00:06:41.614 07:57:52 -- common/autotest_common.sh@829 -- # '[' -z 70038 ']' 00:06:41.614 07:57:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.614 07:57:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.614 07:57:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.614 07:57:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.614 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.614 [2024-12-07 07:57:52.748405] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:41.614 [2024-12-07 07:57:52.748999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70038 ] 00:06:41.614 [2024-12-07 07:57:52.882421] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.614 [2024-12-07 07:57:52.882460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.872 [2024-12-07 07:57:52.948784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.872 [2024-12-07 07:57:52.949351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.872 [2024-12-07 07:57:52.949476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.872 [2024-12-07 07:57:52.949479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.809 07:57:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.809 07:57:53 -- common/autotest_common.sh@862 -- # return 0 00:06:42.809 07:57:53 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70068 00:06:42.809 07:57:53 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:42.809 07:57:53 -- event/cpu_locks.sh@153 -- # waitforlisten 70068 /var/tmp/spdk2.sock 00:06:42.809 07:57:53 -- common/autotest_common.sh@829 -- # '[' -z 70068 ']' 00:06:42.809 07:57:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.809 07:57:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.809 07:57:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:42.809 07:57:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.809 07:57:53 -- common/autotest_common.sh@10 -- # set +x 00:06:42.809 [2024-12-07 07:57:53.821219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.809 [2024-12-07 07:57:53.821502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70068 ] 00:06:42.809 [2024-12-07 07:57:53.961481] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.809 [2024-12-07 07:57:53.961642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.068 [2024-12-07 07:57:54.118549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.068 [2024-12-07 07:57:54.119382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.068 [2024-12-07 07:57:54.119516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.068 [2024-12-07 07:57:54.119515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.636 07:57:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.636 07:57:54 -- common/autotest_common.sh@862 -- # return 0 00:06:43.636 07:57:54 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.636 07:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.636 07:57:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.636 07:57:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.636 07:57:54 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.636 07:57:54 -- common/autotest_common.sh@650 -- # local es=0 00:06:43.636 07:57:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.636 07:57:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:43.636 07:57:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.636 07:57:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:43.636 07:57:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.636 07:57:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.636 07:57:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.636 07:57:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.636 [2024-12-07 07:57:54.889420] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70038 has claimed it. 00:06:43.636 2024/12/07 07:57:54 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:43.636 request: 00:06:43.636 { 00:06:43.636 "method": "framework_enable_cpumask_locks", 00:06:43.636 "params": {} 00:06:43.636 } 00:06:43.636 Got JSON-RPC error response 00:06:43.636 GoRPCClient: error on JSON-RPC call 00:06:43.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
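The Code=-32603 response just above is the second target (on /var/tmp/spdk2.sock) refusing framework_enable_cpumask_locks because core 2 was already claimed when the first target enabled its locks. Issued by hand, the exchange looks roughly like this, assuming scripts/rpc.py from the same SPDK tree:

# both targets were started with --disable-cpumask-locks, so locks are only taken via RPC
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks                        # first target (spdk.sock) locks cores 0-2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 already claimed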
00:06:43.636 07:57:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:43.636 07:57:54 -- common/autotest_common.sh@653 -- # es=1 00:06:43.636 07:57:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.636 07:57:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.636 07:57:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.636 07:57:54 -- event/cpu_locks.sh@158 -- # waitforlisten 70038 /var/tmp/spdk.sock 00:06:43.636 07:57:54 -- common/autotest_common.sh@829 -- # '[' -z 70038 ']' 00:06:43.636 07:57:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.636 07:57:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.636 07:57:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.636 07:57:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.636 07:57:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.894 07:57:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.894 07:57:55 -- common/autotest_common.sh@862 -- # return 0 00:06:43.894 07:57:55 -- event/cpu_locks.sh@159 -- # waitforlisten 70068 /var/tmp/spdk2.sock 00:06:43.894 07:57:55 -- common/autotest_common.sh@829 -- # '[' -z 70068 ']' 00:06:43.894 07:57:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.894 07:57:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.894 07:57:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.894 07:57:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.894 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:06:44.460 07:57:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.460 07:57:55 -- common/autotest_common.sh@862 -- # return 0 00:06:44.460 07:57:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:44.460 07:57:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.460 07:57:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.460 ************************************ 00:06:44.460 END TEST locking_overlapped_coremask_via_rpc 00:06:44.460 ************************************ 00:06:44.460 07:57:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.460 00:06:44.460 real 0m2.735s 00:06:44.460 user 0m1.412s 00:06:44.460 sys 0m0.242s 00:06:44.460 07:57:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.460 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:06:44.460 07:57:55 -- event/cpu_locks.sh@174 -- # cleanup 00:06:44.460 07:57:55 -- event/cpu_locks.sh@15 -- # [[ -z 70038 ]] 00:06:44.460 07:57:55 -- event/cpu_locks.sh@15 -- # killprocess 70038 00:06:44.460 07:57:55 -- common/autotest_common.sh@936 -- # '[' -z 70038 ']' 00:06:44.461 07:57:55 -- common/autotest_common.sh@940 -- # kill -0 70038 00:06:44.461 07:57:55 -- common/autotest_common.sh@941 -- # uname 00:06:44.461 07:57:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.461 07:57:55 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 70038 00:06:44.461 killing process with pid 70038 00:06:44.461 07:57:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:44.461 07:57:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:44.461 07:57:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70038' 00:06:44.461 07:57:55 -- common/autotest_common.sh@955 -- # kill 70038 00:06:44.461 07:57:55 -- common/autotest_common.sh@960 -- # wait 70038 00:06:44.719 07:57:55 -- event/cpu_locks.sh@16 -- # [[ -z 70068 ]] 00:06:44.719 07:57:55 -- event/cpu_locks.sh@16 -- # killprocess 70068 00:06:44.719 07:57:55 -- common/autotest_common.sh@936 -- # '[' -z 70068 ']' 00:06:44.719 07:57:55 -- common/autotest_common.sh@940 -- # kill -0 70068 00:06:44.719 07:57:55 -- common/autotest_common.sh@941 -- # uname 00:06:44.719 07:57:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:44.719 07:57:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70068 00:06:44.719 killing process with pid 70068 00:06:44.719 07:57:55 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:44.719 07:57:55 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:44.719 07:57:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70068' 00:06:44.719 07:57:55 -- common/autotest_common.sh@955 -- # kill 70068 00:06:44.719 07:57:55 -- common/autotest_common.sh@960 -- # wait 70068 00:06:45.286 07:57:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.286 Process with pid 70038 is not found 00:06:45.286 07:57:56 -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.286 07:57:56 -- event/cpu_locks.sh@15 -- # [[ -z 70038 ]] 00:06:45.286 07:57:56 -- event/cpu_locks.sh@15 -- # killprocess 70038 00:06:45.286 07:57:56 -- common/autotest_common.sh@936 -- # '[' -z 70038 ']' 00:06:45.286 07:57:56 -- common/autotest_common.sh@940 -- # kill -0 70038 00:06:45.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70038) - No such process 00:06:45.286 07:57:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70038 is not found' 00:06:45.286 07:57:56 -- event/cpu_locks.sh@16 -- # [[ -z 70068 ]] 00:06:45.286 07:57:56 -- event/cpu_locks.sh@16 -- # killprocess 70068 00:06:45.286 07:57:56 -- common/autotest_common.sh@936 -- # '[' -z 70068 ']' 00:06:45.286 07:57:56 -- common/autotest_common.sh@940 -- # kill -0 70068 00:06:45.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70068) - No such process 00:06:45.286 Process with pid 70068 is not found 00:06:45.286 07:57:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70068 is not found' 00:06:45.286 07:57:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.286 00:06:45.286 real 0m20.405s 00:06:45.286 user 0m36.878s 00:06:45.286 sys 0m5.450s 00:06:45.286 07:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.286 ************************************ 00:06:45.286 END TEST cpu_locks 00:06:45.286 ************************************ 00:06:45.286 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.286 00:06:45.286 real 0m48.383s 00:06:45.286 user 1m35.219s 00:06:45.286 sys 0m9.031s 00:06:45.286 07:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.286 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.286 ************************************ 00:06:45.286 END TEST event 00:06:45.286 ************************************ 00:06:45.286 07:57:56 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.286 07:57:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.286 07:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.286 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.286 ************************************ 00:06:45.286 START TEST thread 00:06:45.286 ************************************ 00:06:45.286 07:57:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:45.545 * Looking for test storage... 00:06:45.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:45.545 07:57:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:45.545 07:57:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:45.545 07:57:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:45.545 07:57:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:45.545 07:57:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:45.545 07:57:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:45.545 07:57:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:45.545 07:57:56 -- scripts/common.sh@335 -- # IFS=.-: 00:06:45.545 07:57:56 -- scripts/common.sh@335 -- # read -ra ver1 00:06:45.545 07:57:56 -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.545 07:57:56 -- scripts/common.sh@336 -- # read -ra ver2 00:06:45.545 07:57:56 -- scripts/common.sh@337 -- # local 'op=<' 00:06:45.545 07:57:56 -- scripts/common.sh@339 -- # ver1_l=2 00:06:45.545 07:57:56 -- scripts/common.sh@340 -- # ver2_l=1 00:06:45.545 07:57:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:45.545 07:57:56 -- scripts/common.sh@343 -- # case "$op" in 00:06:45.545 07:57:56 -- scripts/common.sh@344 -- # : 1 00:06:45.545 07:57:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:45.545 07:57:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.545 07:57:56 -- scripts/common.sh@364 -- # decimal 1 00:06:45.545 07:57:56 -- scripts/common.sh@352 -- # local d=1 00:06:45.545 07:57:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.545 07:57:56 -- scripts/common.sh@354 -- # echo 1 00:06:45.545 07:57:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:45.545 07:57:56 -- scripts/common.sh@365 -- # decimal 2 00:06:45.545 07:57:56 -- scripts/common.sh@352 -- # local d=2 00:06:45.545 07:57:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.545 07:57:56 -- scripts/common.sh@354 -- # echo 2 00:06:45.545 07:57:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:45.545 07:57:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:45.545 07:57:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:45.545 07:57:56 -- scripts/common.sh@367 -- # return 0 00:06:45.545 07:57:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.545 07:57:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:45.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.545 --rc genhtml_branch_coverage=1 00:06:45.545 --rc genhtml_function_coverage=1 00:06:45.545 --rc genhtml_legend=1 00:06:45.545 --rc geninfo_all_blocks=1 00:06:45.545 --rc geninfo_unexecuted_blocks=1 00:06:45.545 00:06:45.545 ' 00:06:45.545 07:57:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:45.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.545 --rc genhtml_branch_coverage=1 00:06:45.545 --rc genhtml_function_coverage=1 00:06:45.545 --rc genhtml_legend=1 00:06:45.545 --rc geninfo_all_blocks=1 00:06:45.545 --rc geninfo_unexecuted_blocks=1 00:06:45.545 00:06:45.545 ' 00:06:45.545 07:57:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:45.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.545 --rc genhtml_branch_coverage=1 00:06:45.545 --rc genhtml_function_coverage=1 00:06:45.545 --rc genhtml_legend=1 00:06:45.545 --rc geninfo_all_blocks=1 00:06:45.545 --rc geninfo_unexecuted_blocks=1 00:06:45.545 00:06:45.545 ' 00:06:45.545 07:57:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:45.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.545 --rc genhtml_branch_coverage=1 00:06:45.545 --rc genhtml_function_coverage=1 00:06:45.545 --rc genhtml_legend=1 00:06:45.545 --rc geninfo_all_blocks=1 00:06:45.545 --rc geninfo_unexecuted_blocks=1 00:06:45.545 00:06:45.545 ' 00:06:45.545 07:57:56 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.545 07:57:56 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:45.545 07:57:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.545 07:57:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.545 ************************************ 00:06:45.545 START TEST thread_poller_perf 00:06:45.545 ************************************ 00:06:45.545 07:57:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.545 [2024-12-07 07:57:56.695466] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:45.545 [2024-12-07 07:57:56.695557] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70227 ] 00:06:45.545 [2024-12-07 07:57:56.819004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.804 [2024-12-07 07:57:56.876283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.804 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:46.739 [2024-12-07T07:57:58.015Z] ====================================== 00:06:46.739 [2024-12-07T07:57:58.015Z] busy:2206619468 (cyc) 00:06:46.739 [2024-12-07T07:57:58.015Z] total_run_count: 380000 00:06:46.739 [2024-12-07T07:57:58.015Z] tsc_hz: 2200000000 (cyc) 00:06:46.739 [2024-12-07T07:57:58.015Z] ====================================== 00:06:46.739 [2024-12-07T07:57:58.015Z] poller_cost: 5806 (cyc), 2639 (nsec) 00:06:46.739 00:06:46.739 real 0m1.271s 00:06:46.739 user 0m1.112s 00:06:46.739 sys 0m0.052s 00:06:46.739 07:57:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.739 07:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.739 ************************************ 00:06:46.739 END TEST thread_poller_perf 00:06:46.739 ************************************ 00:06:46.739 07:57:57 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.739 07:57:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:46.739 07:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.739 07:57:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.739 ************************************ 00:06:46.739 START TEST thread_poller_perf 00:06:46.739 ************************************ 00:06:46.739 07:57:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.998 [2024-12-07 07:57:58.030825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.998 [2024-12-07 07:57:58.031534] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70257 ] 00:06:46.998 [2024-12-07 07:57:58.169452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.998 [2024-12-07 07:57:58.232186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.998 Running 1000 pollers for 1 seconds with 0 microseconds period. 
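The poller_cost figure above is the busy TSC cycle count divided by the number of poller runs, converted to nanoseconds with the reported tsc_hz. Re-deriving it from this run's own totals (integer arithmetic, so rounding can differ by a cycle):

echo $(( 2206619468 / 380000 ))               # ~5806 cycles per poller invocation
echo $(( 5806 * 1000000000 / 2200000000 ))    # ~2639 nsec at the 2.2 GHz TSC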
00:06:48.374 [2024-12-07T07:57:59.651Z] ====================================== 00:06:48.375 [2024-12-07T07:57:59.651Z] busy:2202927190 (cyc) 00:06:48.375 [2024-12-07T07:57:59.651Z] total_run_count: 5268000 00:06:48.375 [2024-12-07T07:57:59.651Z] tsc_hz: 2200000000 (cyc) 00:06:48.375 [2024-12-07T07:57:59.651Z] ====================================== 00:06:48.375 [2024-12-07T07:57:59.651Z] poller_cost: 418 (cyc), 190 (nsec) 00:06:48.375 00:06:48.375 real 0m1.273s 00:06:48.375 user 0m1.108s 00:06:48.375 sys 0m0.058s 00:06:48.375 07:57:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.375 07:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:48.375 ************************************ 00:06:48.375 END TEST thread_poller_perf 00:06:48.375 ************************************ 00:06:48.375 07:57:59 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.375 00:06:48.375 real 0m2.803s 00:06:48.375 user 0m2.341s 00:06:48.375 sys 0m0.243s 00:06:48.375 07:57:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.375 07:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:48.375 ************************************ 00:06:48.375 END TEST thread 00:06:48.375 ************************************ 00:06:48.375 07:57:59 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:48.375 07:57:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:48.375 07:57:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.375 07:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:48.375 ************************************ 00:06:48.375 START TEST accel 00:06:48.375 ************************************ 00:06:48.375 07:57:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:48.375 * Looking for test storage... 00:06:48.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:48.375 07:57:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:48.375 07:57:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:48.375 07:57:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:48.375 07:57:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:48.375 07:57:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:48.375 07:57:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:48.375 07:57:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:48.375 07:57:59 -- scripts/common.sh@335 -- # IFS=.-: 00:06:48.375 07:57:59 -- scripts/common.sh@335 -- # read -ra ver1 00:06:48.375 07:57:59 -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.375 07:57:59 -- scripts/common.sh@336 -- # read -ra ver2 00:06:48.375 07:57:59 -- scripts/common.sh@337 -- # local 'op=<' 00:06:48.375 07:57:59 -- scripts/common.sh@339 -- # ver1_l=2 00:06:48.375 07:57:59 -- scripts/common.sh@340 -- # ver2_l=1 00:06:48.375 07:57:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:48.375 07:57:59 -- scripts/common.sh@343 -- # case "$op" in 00:06:48.375 07:57:59 -- scripts/common.sh@344 -- # : 1 00:06:48.375 07:57:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:48.375 07:57:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.375 07:57:59 -- scripts/common.sh@364 -- # decimal 1 00:06:48.375 07:57:59 -- scripts/common.sh@352 -- # local d=1 00:06:48.375 07:57:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.375 07:57:59 -- scripts/common.sh@354 -- # echo 1 00:06:48.375 07:57:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:48.375 07:57:59 -- scripts/common.sh@365 -- # decimal 2 00:06:48.375 07:57:59 -- scripts/common.sh@352 -- # local d=2 00:06:48.375 07:57:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.375 07:57:59 -- scripts/common.sh@354 -- # echo 2 00:06:48.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.375 07:57:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:48.375 07:57:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:48.375 07:57:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:48.375 07:57:59 -- scripts/common.sh@367 -- # return 0 00:06:48.375 07:57:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.375 07:57:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.375 --rc genhtml_branch_coverage=1 00:06:48.375 --rc genhtml_function_coverage=1 00:06:48.375 --rc genhtml_legend=1 00:06:48.375 --rc geninfo_all_blocks=1 00:06:48.375 --rc geninfo_unexecuted_blocks=1 00:06:48.375 00:06:48.375 ' 00:06:48.375 07:57:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.375 --rc genhtml_branch_coverage=1 00:06:48.375 --rc genhtml_function_coverage=1 00:06:48.375 --rc genhtml_legend=1 00:06:48.375 --rc geninfo_all_blocks=1 00:06:48.375 --rc geninfo_unexecuted_blocks=1 00:06:48.375 00:06:48.375 ' 00:06:48.375 07:57:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.375 --rc genhtml_branch_coverage=1 00:06:48.375 --rc genhtml_function_coverage=1 00:06:48.375 --rc genhtml_legend=1 00:06:48.375 --rc geninfo_all_blocks=1 00:06:48.375 --rc geninfo_unexecuted_blocks=1 00:06:48.375 00:06:48.375 ' 00:06:48.375 07:57:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:48.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.375 --rc genhtml_branch_coverage=1 00:06:48.375 --rc genhtml_function_coverage=1 00:06:48.375 --rc genhtml_legend=1 00:06:48.375 --rc geninfo_all_blocks=1 00:06:48.375 --rc geninfo_unexecuted_blocks=1 00:06:48.375 00:06:48.375 ' 00:06:48.375 07:57:59 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:48.375 07:57:59 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:48.375 07:57:59 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.375 07:57:59 -- accel/accel.sh@59 -- # spdk_tgt_pid=70341 00:06:48.375 07:57:59 -- accel/accel.sh@60 -- # waitforlisten 70341 00:06:48.375 07:57:59 -- common/autotest_common.sh@829 -- # '[' -z 70341 ']' 00:06:48.375 07:57:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.375 07:57:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.375 07:57:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:48.375 07:57:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.375 07:57:59 -- common/autotest_common.sh@10 -- # set +x 00:06:48.375 07:57:59 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:48.375 07:57:59 -- accel/accel.sh@58 -- # build_accel_config 00:06:48.375 07:57:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.375 07:57:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.375 07:57:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.375 07:57:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.375 07:57:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.375 07:57:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.375 07:57:59 -- accel/accel.sh@42 -- # jq -r . 00:06:48.375 [2024-12-07 07:57:59.603530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:48.375 [2024-12-07 07:57:59.603809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70341 ] 00:06:48.634 [2024-12-07 07:57:59.741373] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.634 [2024-12-07 07:57:59.801164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.634 [2024-12-07 07:57:59.801653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.572 07:58:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.572 07:58:00 -- common/autotest_common.sh@862 -- # return 0 00:06:49.572 07:58:00 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:49.572 07:58:00 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:49.572 07:58:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.572 07:58:00 -- common/autotest_common.sh@10 -- # set +x 00:06:49.572 07:58:00 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:49.572 07:58:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.572 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.572 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.572 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.573 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.573 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.573 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.573 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.573 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.573 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.573 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.573 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.573 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.573 07:58:00 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # IFS== 00:06:49.573 07:58:00 -- accel/accel.sh@64 -- # read -r opc module 00:06:49.573 07:58:00 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:49.573 07:58:00 -- accel/accel.sh@67 -- # killprocess 70341 00:06:49.573 07:58:00 -- common/autotest_common.sh@936 -- # '[' -z 70341 ']' 00:06:49.573 07:58:00 -- common/autotest_common.sh@940 -- # kill -0 70341 00:06:49.573 07:58:00 -- common/autotest_common.sh@941 -- # uname 00:06:49.573 07:58:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.573 07:58:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70341 00:06:49.573 07:58:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.573 07:58:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.573 07:58:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70341' 00:06:49.573 killing process with pid 70341 00:06:49.573 07:58:00 -- common/autotest_common.sh@955 -- # kill 70341 00:06:49.573 07:58:00 -- common/autotest_common.sh@960 -- # wait 70341 00:06:49.831 07:58:01 -- accel/accel.sh@68 -- # trap - ERR 00:06:49.831 07:58:01 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:49.831 07:58:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:49.831 07:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.831 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:49.831 07:58:01 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:49.831 07:58:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:49.831 07:58:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.831 07:58:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.831 07:58:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.831 07:58:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.831 07:58:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.831 07:58:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.831 07:58:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.831 07:58:01 -- accel/accel.sh@42 -- # jq -r . 
00:06:49.831 07:58:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.831 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:50.090 07:58:01 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:50.090 07:58:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.090 07:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.090 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:50.090 ************************************ 00:06:50.090 START TEST accel_missing_filename 00:06:50.090 ************************************ 00:06:50.090 07:58:01 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:50.090 07:58:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.090 07:58:01 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:50.090 07:58:01 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.090 07:58:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.090 07:58:01 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.090 07:58:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.090 07:58:01 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:50.090 07:58:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:50.090 07:58:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.090 07:58:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.090 07:58:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.090 07:58:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.090 07:58:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.090 07:58:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.090 07:58:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.090 07:58:01 -- accel/accel.sh@42 -- # jq -r . 00:06:50.090 [2024-12-07 07:58:01.171992] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.090 [2024-12-07 07:58:01.172092] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70409 ] 00:06:50.090 [2024-12-07 07:58:01.309172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.350 [2024-12-07 07:58:01.369629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.350 [2024-12-07 07:58:01.421882] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.350 [2024-12-07 07:58:01.492189] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:50.350 A filename is required. 
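The "A filename is required." abort above is the expected outcome: the compress workload needs an uncompressed input file passed with -l, which this negative test deliberately omits. A sketch of an invocation that satisfies the check, using the bib file the suite itself feeds to the next test:

# compress requires -l <input file>; no -y, since compress does not support the verify option
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib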
00:06:50.350 07:58:01 -- common/autotest_common.sh@653 -- # es=234 00:06:50.350 07:58:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.350 07:58:01 -- common/autotest_common.sh@662 -- # es=106 00:06:50.350 07:58:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.350 07:58:01 -- common/autotest_common.sh@670 -- # es=1 00:06:50.350 07:58:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.350 00:06:50.350 real 0m0.428s 00:06:50.350 user 0m0.265s 00:06:50.350 sys 0m0.111s 00:06:50.350 07:58:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.350 ************************************ 00:06:50.350 END TEST accel_missing_filename 00:06:50.350 ************************************ 00:06:50.350 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:50.350 07:58:01 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:50.350 07:58:01 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:50.350 07:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.350 07:58:01 -- common/autotest_common.sh@10 -- # set +x 00:06:50.609 ************************************ 00:06:50.609 START TEST accel_compress_verify 00:06:50.609 ************************************ 00:06:50.609 07:58:01 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:50.609 07:58:01 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.609 07:58:01 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:50.609 07:58:01 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.609 07:58:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.609 07:58:01 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.609 07:58:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.609 07:58:01 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:50.609 07:58:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:50.609 07:58:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.609 07:58:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.609 07:58:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.609 07:58:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.609 07:58:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.609 07:58:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.609 07:58:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.609 07:58:01 -- accel/accel.sh@42 -- # jq -r . 00:06:50.609 [2024-12-07 07:58:01.647895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:50.609 [2024-12-07 07:58:01.647997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70438 ] 00:06:50.609 [2024-12-07 07:58:01.786626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.609 [2024-12-07 07:58:01.855638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.869 [2024-12-07 07:58:01.917333] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.869 [2024-12-07 07:58:01.991560] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:50.869 00:06:50.869 Compression does not support the verify option, aborting. 00:06:50.869 07:58:02 -- common/autotest_common.sh@653 -- # es=161 00:06:50.869 07:58:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.869 07:58:02 -- common/autotest_common.sh@662 -- # es=33 00:06:50.869 07:58:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.869 07:58:02 -- common/autotest_common.sh@670 -- # es=1 00:06:50.869 07:58:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.869 00:06:50.869 real 0m0.423s 00:06:50.869 user 0m0.247s 00:06:50.869 sys 0m0.122s 00:06:50.869 ************************************ 00:06:50.869 END TEST accel_compress_verify 00:06:50.869 ************************************ 00:06:50.869 07:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.869 07:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:50.869 07:58:02 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:50.869 07:58:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:50.869 07:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.869 07:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:50.869 ************************************ 00:06:50.869 START TEST accel_wrong_workload 00:06:50.869 ************************************ 00:06:50.869 07:58:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:50.869 07:58:02 -- common/autotest_common.sh@650 -- # local es=0 00:06:50.869 07:58:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:50.869 07:58:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:50.869 07:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.869 07:58:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:50.869 07:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.869 07:58:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:50.869 07:58:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:50.869 07:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.869 07:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.869 07:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.869 07:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.869 07:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.869 07:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.869 07:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.869 07:58:02 -- accel/accel.sh@42 -- # jq -r . 
00:06:50.869 Unsupported workload type: foobar 00:06:50.869 [2024-12-07 07:58:02.124387] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:50.869 accel_perf options: 00:06:50.869 [-h help message] 00:06:50.869 [-q queue depth per core] 00:06:50.869 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.869 [-T number of threads per core 00:06:50.869 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.869 [-t time in seconds] 00:06:50.869 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.869 [ dif_verify, , dif_generate, dif_generate_copy 00:06:50.869 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.869 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.869 [-S for crc32c workload, use this seed value (default 0) 00:06:50.869 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.869 [-f for fill workload, use this BYTE value (default 255) 00:06:50.869 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.869 [-y verify result if this switch is on] 00:06:50.869 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.869 Can be used to spread operations across a wider range of memory. 00:06:50.869 07:58:02 -- common/autotest_common.sh@653 -- # es=1 00:06:50.869 07:58:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.869 07:58:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.870 07:58:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.870 00:06:50.870 real 0m0.032s 00:06:50.870 user 0m0.020s 00:06:50.870 sys 0m0.012s 00:06:50.870 ************************************ 00:06:50.870 END TEST accel_wrong_workload 00:06:50.870 ************************************ 00:06:50.870 07:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.870 07:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:51.129 07:58:02 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.129 07:58:02 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:51.129 07:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.129 07:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:51.129 ************************************ 00:06:51.129 START TEST accel_negative_buffers 00:06:51.129 ************************************ 00:06:51.129 07:58:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:51.129 07:58:02 -- common/autotest_common.sh@650 -- # local es=0 00:06:51.129 07:58:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:51.129 07:58:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:51.129 07:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.129 07:58:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:51.129 07:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.129 07:58:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:51.129 07:58:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:51.129 07:58:02 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:51.129 07:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.129 07:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.129 07:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.129 07:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.129 07:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.129 07:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.129 07:58:02 -- accel/accel.sh@42 -- # jq -r . 00:06:51.129 -x option must be non-negative. 00:06:51.129 [2024-12-07 07:58:02.205831] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:51.129 accel_perf options: 00:06:51.129 [-h help message] 00:06:51.129 [-q queue depth per core] 00:06:51.129 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:51.129 [-T number of threads per core 00:06:51.129 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:51.129 [-t time in seconds] 00:06:51.129 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:51.129 [ dif_verify, , dif_generate, dif_generate_copy 00:06:51.129 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:51.129 [-l for compress/decompress workloads, name of uncompressed input file 00:06:51.129 [-S for crc32c workload, use this seed value (default 0) 00:06:51.129 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:51.129 [-f for fill workload, use this BYTE value (default 255) 00:06:51.129 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:51.129 [-y verify result if this switch is on] 00:06:51.129 [-a tasks to allocate per core (default: same value as -q)] 00:06:51.129 Can be used to spread operations across a wider range of memory. 
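The option listing is printed because -x -1 fails validation: the xor source-buffer count must be non-negative and, per the help text, at least 2. A sketch of the smallest accepted form, assuming the same accel_perf binary and that -y verification applies to xor as it does to the crc32c run below:

# xor across the minimum of two source buffers, with result verification
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2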
00:06:51.129 ************************************ 00:06:51.129 END TEST accel_negative_buffers 00:06:51.129 ************************************ 00:06:51.129 07:58:02 -- common/autotest_common.sh@653 -- # es=1 00:06:51.129 07:58:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.129 07:58:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.129 07:58:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.129 00:06:51.129 real 0m0.028s 00:06:51.129 user 0m0.018s 00:06:51.129 sys 0m0.010s 00:06:51.129 07:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.129 07:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:51.129 07:58:02 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:51.129 07:58:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:51.129 07:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.129 07:58:02 -- common/autotest_common.sh@10 -- # set +x 00:06:51.129 ************************************ 00:06:51.129 START TEST accel_crc32c 00:06:51.129 ************************************ 00:06:51.129 07:58:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:51.129 07:58:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.129 07:58:02 -- accel/accel.sh@17 -- # local accel_module 00:06:51.129 07:58:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:51.129 07:58:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:51.129 07:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.129 07:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.129 07:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.129 07:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.129 07:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.129 07:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.129 07:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.129 07:58:02 -- accel/accel.sh@42 -- # jq -r . 00:06:51.129 [2024-12-07 07:58:02.285385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.130 [2024-12-07 07:58:02.285469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70492 ] 00:06:51.389 [2024-12-07 07:58:02.422478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.389 [2024-12-07 07:58:02.480409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.766 07:58:03 -- accel/accel.sh@18 -- # out=' 00:06:52.766 SPDK Configuration: 00:06:52.766 Core mask: 0x1 00:06:52.766 00:06:52.766 Accel Perf Configuration: 00:06:52.766 Workload Type: crc32c 00:06:52.766 CRC-32C seed: 32 00:06:52.766 Transfer size: 4096 bytes 00:06:52.766 Vector count 1 00:06:52.766 Module: software 00:06:52.766 Queue depth: 32 00:06:52.766 Allocate depth: 32 00:06:52.766 # threads/core: 1 00:06:52.766 Run time: 1 seconds 00:06:52.766 Verify: Yes 00:06:52.766 00:06:52.766 Running for 1 seconds... 
00:06:52.766 00:06:52.766 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.767 ------------------------------------------------------------------------------------ 00:06:52.767 0,0 560448/s 2189 MiB/s 0 0 00:06:52.767 ==================================================================================== 00:06:52.767 Total 560448/s 2189 MiB/s 0 0' 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:52.767 07:58:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.767 07:58:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.767 07:58:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.767 07:58:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.767 07:58:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.767 07:58:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.767 07:58:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.767 07:58:03 -- accel/accel.sh@42 -- # jq -r . 00:06:52.767 [2024-12-07 07:58:03.691439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:52.767 [2024-12-07 07:58:03.691543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70512 ] 00:06:52.767 [2024-12-07 07:58:03.829164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.767 [2024-12-07 07:58:03.885435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val= 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val= 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=0x1 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val= 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val= 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=crc32c 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=32 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val= 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=software 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=32 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=32 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=1 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val=Yes 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val= 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.767 07:58:03 -- accel/accel.sh@21 -- # val= 00:06:52.767 07:58:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.767 07:58:03 -- accel/accel.sh@20 -- # read -r var val 00:06:54.146 07:58:05 -- accel/accel.sh@21 -- # val= 00:06:54.146 07:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.146 07:58:05 -- accel/accel.sh@21 -- # val= 00:06:54.146 07:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.146 07:58:05 -- accel/accel.sh@21 -- # val= 00:06:54.146 07:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.146 07:58:05 -- accel/accel.sh@21 -- # val= 00:06:54.146 07:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.146 07:58:05 -- accel/accel.sh@21 -- # val= 00:06:54.146 ************************************ 00:06:54.146 END TEST accel_crc32c 00:06:54.146 ************************************ 
00:06:54.146 07:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.146 07:58:05 -- accel/accel.sh@21 -- # val= 00:06:54.146 07:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.146 07:58:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.146 07:58:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.146 07:58:05 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:54.146 07:58:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.146 00:06:54.146 real 0m2.820s 00:06:54.146 user 0m2.389s 00:06:54.146 sys 0m0.231s 00:06:54.146 07:58:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.146 07:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:54.146 07:58:05 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:54.147 07:58:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:54.147 07:58:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.147 07:58:05 -- common/autotest_common.sh@10 -- # set +x 00:06:54.147 ************************************ 00:06:54.147 START TEST accel_crc32c_C2 00:06:54.147 ************************************ 00:06:54.147 07:58:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:54.147 07:58:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.147 07:58:05 -- accel/accel.sh@17 -- # local accel_module 00:06:54.147 07:58:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:54.147 07:58:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:54.147 07:58:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.147 07:58:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.147 07:58:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.147 07:58:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.147 07:58:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.147 07:58:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.147 07:58:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.147 07:58:05 -- accel/accel.sh@42 -- # jq -r . 00:06:54.147 [2024-12-07 07:58:05.148013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.147 [2024-12-07 07:58:05.148099] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70546 ] 00:06:54.147 [2024-12-07 07:58:05.270942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.147 [2024-12-07 07:58:05.327876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.523 07:58:06 -- accel/accel.sh@18 -- # out=' 00:06:55.523 SPDK Configuration: 00:06:55.523 Core mask: 0x1 00:06:55.523 00:06:55.523 Accel Perf Configuration: 00:06:55.523 Workload Type: crc32c 00:06:55.523 CRC-32C seed: 0 00:06:55.524 Transfer size: 4096 bytes 00:06:55.524 Vector count 2 00:06:55.524 Module: software 00:06:55.524 Queue depth: 32 00:06:55.524 Allocate depth: 32 00:06:55.524 # threads/core: 1 00:06:55.524 Run time: 1 seconds 00:06:55.524 Verify: Yes 00:06:55.524 00:06:55.524 Running for 1 seconds... 
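A note on the two -C 2 result tables in this section (this crc32c run and the later copy_crc32c run): the per-core bandwidth is about twice the Total figure at the same transfer rate, e.g. 424768/s is shown as 3318 MiB/s on core 0,0 but 1659 MiB/s in the Total row. Since 424768 * 4096 B ~ 1659 MiB/s, the Total row appears to count the 4096-byte transfer size only, while the per-core row appears to also include the vector count of 2 (an inference from these numbers, not from the accel_perf source).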
00:06:55.524 00:06:55.524 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.524 ------------------------------------------------------------------------------------ 00:06:55.524 0,0 424768/s 3318 MiB/s 0 0 00:06:55.524 ==================================================================================== 00:06:55.524 Total 424768/s 1659 MiB/s 0 0' 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.524 07:58:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.524 07:58:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.524 07:58:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.524 07:58:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.524 07:58:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.524 07:58:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.524 07:58:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.524 07:58:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.524 07:58:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.524 07:58:06 -- accel/accel.sh@42 -- # jq -r . 00:06:55.524 [2024-12-07 07:58:06.538034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:55.524 [2024-12-07 07:58:06.538304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70560 ] 00:06:55.524 [2024-12-07 07:58:06.675990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.524 [2024-12-07 07:58:06.737482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.524 07:58:06 -- accel/accel.sh@21 -- # val= 00:06:55.524 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.524 07:58:06 -- accel/accel.sh@21 -- # val= 00:06:55.524 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.524 07:58:06 -- accel/accel.sh@21 -- # val=0x1 00:06:55.524 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.524 07:58:06 -- accel/accel.sh@21 -- # val= 00:06:55.524 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.524 07:58:06 -- accel/accel.sh@21 -- # val= 00:06:55.524 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.524 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.524 07:58:06 -- accel/accel.sh@21 -- # val=crc32c 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 07:58:06 -- accel/accel.sh@21 -- # val=0 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 07:58:06 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 07:58:06 -- accel/accel.sh@21 -- # val= 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 07:58:06 -- accel/accel.sh@21 -- # val=software 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 07:58:06 -- accel/accel.sh@21 -- # val=32 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 07:58:06 -- accel/accel.sh@21 -- # val=32 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.782 07:58:06 -- accel/accel.sh@21 -- # val=1 00:06:55.782 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.782 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.783 07:58:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.783 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.783 07:58:06 -- accel/accel.sh@21 -- # val=Yes 00:06:55.783 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.783 07:58:06 -- accel/accel.sh@21 -- # val= 00:06:55.783 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.783 07:58:06 -- accel/accel.sh@21 -- # val= 00:06:55.783 07:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.783 07:58:06 -- accel/accel.sh@20 -- # read -r var val 00:06:56.719 07:58:07 -- accel/accel.sh@21 -- # val= 00:06:56.719 07:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.719 07:58:07 -- accel/accel.sh@21 -- # val= 00:06:56.719 07:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.719 07:58:07 -- accel/accel.sh@21 -- # val= 00:06:56.719 07:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.719 07:58:07 -- accel/accel.sh@21 -- # val= 00:06:56.719 07:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.719 07:58:07 -- accel/accel.sh@21 -- # val= 00:06:56.719 07:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.719 07:58:07 -- 
accel/accel.sh@20 -- # read -r var val 00:06:56.719 07:58:07 -- accel/accel.sh@21 -- # val= 00:06:56.719 07:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.719 07:58:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.719 07:58:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.719 07:58:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:56.719 07:58:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.719 00:06:56.719 real 0m2.803s 00:06:56.719 user 0m2.389s 00:06:56.719 sys 0m0.215s 00:06:56.719 07:58:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:56.719 07:58:07 -- common/autotest_common.sh@10 -- # set +x 00:06:56.719 ************************************ 00:06:56.719 END TEST accel_crc32c_C2 00:06:56.719 ************************************ 00:06:56.719 07:58:07 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:56.719 07:58:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:56.719 07:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.719 07:58:07 -- common/autotest_common.sh@10 -- # set +x 00:06:56.719 ************************************ 00:06:56.719 START TEST accel_copy 00:06:56.719 ************************************ 00:06:56.719 07:58:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:56.719 07:58:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.719 07:58:07 -- accel/accel.sh@17 -- # local accel_module 00:06:56.719 07:58:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:56.719 07:58:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.719 07:58:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:56.719 07:58:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.719 07:58:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.719 07:58:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.719 07:58:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.719 07:58:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.719 07:58:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.719 07:58:07 -- accel/accel.sh@42 -- # jq -r . 00:06:56.978 [2024-12-07 07:58:08.004521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:56.978 [2024-12-07 07:58:08.004615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70600 ] 00:06:56.978 [2024-12-07 07:58:08.139710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.978 [2024-12-07 07:58:08.201265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.353 07:58:09 -- accel/accel.sh@18 -- # out=' 00:06:58.354 SPDK Configuration: 00:06:58.354 Core mask: 0x1 00:06:58.354 00:06:58.354 Accel Perf Configuration: 00:06:58.354 Workload Type: copy 00:06:58.354 Transfer size: 4096 bytes 00:06:58.354 Vector count 1 00:06:58.354 Module: software 00:06:58.354 Queue depth: 32 00:06:58.354 Allocate depth: 32 00:06:58.354 # threads/core: 1 00:06:58.354 Run time: 1 seconds 00:06:58.354 Verify: Yes 00:06:58.354 00:06:58.354 Running for 1 seconds... 
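A minimal sketch for rerunning one of these software-path cases by hand, assuming an SPDK checkout built with examples; the -c /dev/fd/62 argument in the wrapped commands appears to be the JSON accel configuration piped in by the test harness and is assumed to be optional for a plain software run:

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w copy -y
  # assumed standalone run: prints an SPDK Configuration block like the one
  # above and a one-second result table like the one below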
00:06:58.354 00:06:58.354 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.354 ------------------------------------------------------------------------------------ 00:06:58.354 0,0 389280/s 1520 MiB/s 0 0 00:06:58.354 ==================================================================================== 00:06:58.354 Total 389280/s 1520 MiB/s 0 0' 00:06:58.354 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.354 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.354 07:58:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:58.354 07:58:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:58.354 07:58:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.354 07:58:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.354 07:58:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.354 07:58:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.354 07:58:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.354 07:58:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.354 07:58:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.354 07:58:09 -- accel/accel.sh@42 -- # jq -r . 00:06:58.354 [2024-12-07 07:58:09.412653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.354 [2024-12-07 07:58:09.412750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70616 ] 00:06:58.354 [2024-12-07 07:58:09.548282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.354 [2024-12-07 07:58:09.599618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val= 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val= 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val=0x1 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val= 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val= 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val=copy 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- 
accel/accel.sh@21 -- # val= 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val=software 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val=32 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val=32 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val=1 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val=Yes 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val= 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.613 07:58:09 -- accel/accel.sh@21 -- # val= 00:06:58.613 07:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.613 07:58:09 -- accel/accel.sh@20 -- # read -r var val 00:06:59.559 07:58:10 -- accel/accel.sh@21 -- # val= 00:06:59.559 07:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.559 07:58:10 -- accel/accel.sh@21 -- # val= 00:06:59.559 07:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.559 07:58:10 -- accel/accel.sh@21 -- # val= 00:06:59.559 07:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.559 07:58:10 -- accel/accel.sh@21 -- # val= 00:06:59.559 07:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.559 07:58:10 -- accel/accel.sh@21 -- # val= 00:06:59.559 07:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.559 07:58:10 -- accel/accel.sh@21 -- # val= 00:06:59.559 07:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.559 07:58:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.559 07:58:10 -- 
accel/accel.sh@20 -- # read -r var val 00:06:59.559 07:58:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.559 07:58:10 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:59.559 07:58:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.559 00:06:59.559 real 0m2.808s 00:06:59.559 user 0m2.392s 00:06:59.559 sys 0m0.212s 00:06:59.559 07:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.559 ************************************ 00:06:59.559 END TEST accel_copy 00:06:59.559 ************************************ 00:06:59.559 07:58:10 -- common/autotest_common.sh@10 -- # set +x 00:06:59.848 07:58:10 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.848 07:58:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:59.848 07:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.848 07:58:10 -- common/autotest_common.sh@10 -- # set +x 00:06:59.848 ************************************ 00:06:59.848 START TEST accel_fill 00:06:59.848 ************************************ 00:06:59.848 07:58:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.848 07:58:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.848 07:58:10 -- accel/accel.sh@17 -- # local accel_module 00:06:59.848 07:58:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.848 07:58:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:59.848 07:58:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.848 07:58:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.848 07:58:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.848 07:58:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.848 07:58:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.848 07:58:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.848 07:58:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.848 07:58:10 -- accel/accel.sh@42 -- # jq -r . 00:06:59.848 [2024-12-07 07:58:10.865703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.848 [2024-12-07 07:58:10.865802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70651 ] 00:06:59.848 [2024-12-07 07:58:11.005498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.848 [2024-12-07 07:58:11.089288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.238 07:58:12 -- accel/accel.sh@18 -- # out=' 00:07:01.238 SPDK Configuration: 00:07:01.238 Core mask: 0x1 00:07:01.238 00:07:01.238 Accel Perf Configuration: 00:07:01.238 Workload Type: fill 00:07:01.238 Fill pattern: 0x80 00:07:01.238 Transfer size: 4096 bytes 00:07:01.238 Vector count 1 00:07:01.238 Module: software 00:07:01.238 Queue depth: 64 00:07:01.238 Allocate depth: 64 00:07:01.238 # threads/core: 1 00:07:01.238 Run time: 1 seconds 00:07:01.238 Verify: Yes 00:07:01.238 00:07:01.238 Running for 1 seconds... 
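The fill configuration above maps directly onto the wrapped command line: -q 64 and -a 64 give the queue and allocate depths of 64, and -f 128 is the fill byte printed in hex. A quick check of that conversion:

  printf '0x%x\n' 128    # prints 0x80, the "Fill pattern" value shown above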
00:07:01.238 00:07:01.238 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.238 ------------------------------------------------------------------------------------ 00:07:01.238 0,0 562880/s 2198 MiB/s 0 0 00:07:01.238 ==================================================================================== 00:07:01.238 Total 562880/s 2198 MiB/s 0 0' 00:07:01.238 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.238 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.238 07:58:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.238 07:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:01.238 07:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.238 07:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.238 07:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.238 07:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.238 07:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.238 07:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.238 07:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.238 07:58:12 -- accel/accel.sh@42 -- # jq -r . 00:07:01.238 [2024-12-07 07:58:12.322798] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.238 [2024-12-07 07:58:12.323526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70670 ] 00:07:01.238 [2024-12-07 07:58:12.460592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.496 [2024-12-07 07:58:12.516737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val= 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val= 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=0x1 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val= 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val= 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=fill 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=0x80 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 
00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val= 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=software 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=64 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=64 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=1 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val=Yes 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val= 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:01.496 07:58:12 -- accel/accel.sh@21 -- # val= 00:07:01.496 07:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:01.496 07:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:02.430 07:58:13 -- accel/accel.sh@21 -- # val= 00:07:02.430 07:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.430 07:58:13 -- accel/accel.sh@21 -- # val= 00:07:02.430 07:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.430 07:58:13 -- accel/accel.sh@21 -- # val= 00:07:02.430 07:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.430 07:58:13 -- accel/accel.sh@21 -- # val= 00:07:02.430 07:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.430 07:58:13 -- accel/accel.sh@21 -- # val= 00:07:02.430 07:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # IFS=: 
00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.430 07:58:13 -- accel/accel.sh@21 -- # val= 00:07:02.430 07:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.430 07:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.430 07:58:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.430 07:58:13 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:02.430 07:58:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.430 00:07:02.430 real 0m2.859s 00:07:02.430 user 0m2.421s 00:07:02.430 sys 0m0.235s 00:07:02.430 07:58:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.430 07:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:02.430 ************************************ 00:07:02.430 END TEST accel_fill 00:07:02.430 ************************************ 00:07:02.689 07:58:13 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:02.689 07:58:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:02.689 07:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.689 07:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:02.689 ************************************ 00:07:02.689 START TEST accel_copy_crc32c 00:07:02.690 ************************************ 00:07:02.690 07:58:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:02.690 07:58:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.690 07:58:13 -- accel/accel.sh@17 -- # local accel_module 00:07:02.690 07:58:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:02.690 07:58:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:02.690 07:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.690 07:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.690 07:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.690 07:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.690 07:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.690 07:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.690 07:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.690 07:58:13 -- accel/accel.sh@42 -- # jq -r . 00:07:02.690 [2024-12-07 07:58:13.778701] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.690 [2024-12-07 07:58:13.778795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:07:02.690 [2024-12-07 07:58:13.916661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.948 [2024-12-07 07:58:13.983721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.324 07:58:15 -- accel/accel.sh@18 -- # out=' 00:07:04.324 SPDK Configuration: 00:07:04.324 Core mask: 0x1 00:07:04.324 00:07:04.324 Accel Perf Configuration: 00:07:04.324 Workload Type: copy_crc32c 00:07:04.324 CRC-32C seed: 0 00:07:04.324 Vector size: 4096 bytes 00:07:04.324 Transfer size: 4096 bytes 00:07:04.324 Vector count 1 00:07:04.324 Module: software 00:07:04.324 Queue depth: 32 00:07:04.324 Allocate depth: 32 00:07:04.324 # threads/core: 1 00:07:04.324 Run time: 1 seconds 00:07:04.324 Verify: Yes 00:07:04.324 00:07:04.324 Running for 1 seconds... 
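The copy_crc32c case checksums and copies the same 4096-byte buffers (Vector size and Transfer size are both 4096 above), and the arithmetic for the table below is the same as before: 303584 transfers/s * 4096 B ~ 1185 MiB/s. That this lands below the plain crc32c (~2189 MiB/s) and plain copy (~1520 MiB/s) rates earlier in the log is what one would expect from a software path doing both operations per buffer.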
00:07:04.324 00:07:04.324 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.324 ------------------------------------------------------------------------------------ 00:07:04.324 0,0 303584/s 1185 MiB/s 0 0 00:07:04.324 ==================================================================================== 00:07:04.324 Total 303584/s 1185 MiB/s 0 0' 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:04.324 07:58:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:04.324 07:58:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.324 07:58:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.324 07:58:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.324 07:58:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.324 07:58:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.324 07:58:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.324 07:58:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.324 07:58:15 -- accel/accel.sh@42 -- # jq -r . 00:07:04.324 [2024-12-07 07:58:15.193155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.324 [2024-12-07 07:58:15.193271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70721 ] 00:07:04.324 [2024-12-07 07:58:15.329850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.324 [2024-12-07 07:58:15.388724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val= 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val= 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=0x1 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val= 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val= 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=0 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 
07:58:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val= 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=software 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=32 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=32 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=1 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val=Yes 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val= 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.324 07:58:15 -- accel/accel.sh@21 -- # val= 00:07:04.324 07:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.324 07:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:05.700 07:58:16 -- accel/accel.sh@21 -- # val= 00:07:05.700 07:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.700 07:58:16 -- accel/accel.sh@21 -- # val= 00:07:05.700 07:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.700 07:58:16 -- accel/accel.sh@21 -- # val= 00:07:05.700 07:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.700 07:58:16 -- accel/accel.sh@21 -- # val= 00:07:05.700 07:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # IFS=: 
00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.700 07:58:16 -- accel/accel.sh@21 -- # val= 00:07:05.700 07:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.700 07:58:16 -- accel/accel.sh@21 -- # val= 00:07:05.700 07:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:05.700 07:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:05.700 07:58:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.700 07:58:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:05.700 07:58:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.700 00:07:05.700 real 0m2.839s 00:07:05.700 user 0m2.406s 00:07:05.700 sys 0m0.230s 00:07:05.700 ************************************ 00:07:05.700 END TEST accel_copy_crc32c 00:07:05.700 ************************************ 00:07:05.700 07:58:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.700 07:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:05.700 07:58:16 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.700 07:58:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:05.700 07:58:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.700 07:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:05.700 ************************************ 00:07:05.700 START TEST accel_copy_crc32c_C2 00:07:05.700 ************************************ 00:07:05.700 07:58:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:05.700 07:58:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.700 07:58:16 -- accel/accel.sh@17 -- # local accel_module 00:07:05.700 07:58:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:05.700 07:58:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:05.700 07:58:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.700 07:58:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.700 07:58:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.700 07:58:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.700 07:58:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.700 07:58:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.700 07:58:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.700 07:58:16 -- accel/accel.sh@42 -- # jq -r . 00:07:05.700 [2024-12-07 07:58:16.672852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:05.700 [2024-12-07 07:58:16.672979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 00:07:05.700 [2024-12-07 07:58:16.810324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.700 [2024-12-07 07:58:16.887878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.079 07:58:18 -- accel/accel.sh@18 -- # out=' 00:07:07.079 SPDK Configuration: 00:07:07.079 Core mask: 0x1 00:07:07.079 00:07:07.079 Accel Perf Configuration: 00:07:07.079 Workload Type: copy_crc32c 00:07:07.079 CRC-32C seed: 0 00:07:07.079 Vector size: 4096 bytes 00:07:07.079 Transfer size: 8192 bytes 00:07:07.079 Vector count 2 00:07:07.079 Module: software 00:07:07.079 Queue depth: 32 00:07:07.079 Allocate depth: 32 00:07:07.079 # threads/core: 1 00:07:07.079 Run time: 1 seconds 00:07:07.079 Verify: Yes 00:07:07.079 00:07:07.079 Running for 1 seconds... 00:07:07.079 00:07:07.079 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.079 ------------------------------------------------------------------------------------ 00:07:07.079 0,0 218240/s 1705 MiB/s 0 0 00:07:07.079 ==================================================================================== 00:07:07.079 Total 218240/s 852 MiB/s 0 0' 00:07:07.079 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.079 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.079 07:58:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:07.079 07:58:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:07.079 07:58:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.079 07:58:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.079 07:58:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.079 07:58:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.079 07:58:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.079 07:58:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.079 07:58:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.079 07:58:18 -- accel/accel.sh@42 -- # jq -r . 00:07:07.079 [2024-12-07 07:58:18.100104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:07.079 [2024-12-07 07:58:18.100221] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70775 ] 00:07:07.079 [2024-12-07 07:58:18.236068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.079 [2024-12-07 07:58:18.300408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.339 07:58:18 -- accel/accel.sh@21 -- # val= 00:07:07.339 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.339 07:58:18 -- accel/accel.sh@21 -- # val= 00:07:07.339 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.339 07:58:18 -- accel/accel.sh@21 -- # val=0x1 00:07:07.339 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.339 07:58:18 -- accel/accel.sh@21 -- # val= 00:07:07.339 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.339 07:58:18 -- accel/accel.sh@21 -- # val= 00:07:07.339 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.339 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val=0 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val= 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val=software 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val=32 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val=32 
00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val=1 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val=Yes 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val= 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.340 07:58:18 -- accel/accel.sh@21 -- # val= 00:07:07.340 07:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.340 07:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:08.278 07:58:19 -- accel/accel.sh@21 -- # val= 00:07:08.278 07:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.278 07:58:19 -- accel/accel.sh@21 -- # val= 00:07:08.278 07:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.278 07:58:19 -- accel/accel.sh@21 -- # val= 00:07:08.278 07:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.278 07:58:19 -- accel/accel.sh@21 -- # val= 00:07:08.278 07:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.278 07:58:19 -- accel/accel.sh@21 -- # val= 00:07:08.278 07:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.278 07:58:19 -- accel/accel.sh@21 -- # val= 00:07:08.278 07:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.278 07:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.278 07:58:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.278 07:58:19 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:08.278 07:58:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.278 00:07:08.278 real 0m2.850s 00:07:08.278 user 0m2.397s 00:07:08.278 sys 0m0.249s 00:07:08.278 ************************************ 00:07:08.278 END TEST accel_copy_crc32c_C2 00:07:08.278 ************************************ 00:07:08.278 07:58:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.278 07:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.278 07:58:19 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:08.278 07:58:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:08.278 07:58:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.278 07:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.537 ************************************ 00:07:08.537 START TEST accel_dualcast 00:07:08.537 ************************************ 00:07:08.537 07:58:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:08.537 07:58:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.537 07:58:19 -- accel/accel.sh@17 -- # local accel_module 00:07:08.537 07:58:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:08.537 07:58:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:08.537 07:58:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.537 07:58:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.537 07:58:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.537 07:58:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.537 07:58:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.537 07:58:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.537 07:58:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.537 07:58:19 -- accel/accel.sh@42 -- # jq -r . 00:07:08.537 [2024-12-07 07:58:19.577384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.537 [2024-12-07 07:58:19.577483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70809 ] 00:07:08.537 [2024-12-07 07:58:19.714847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.537 [2024-12-07 07:58:19.784122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.915 07:58:20 -- accel/accel.sh@18 -- # out=' 00:07:09.915 SPDK Configuration: 00:07:09.915 Core mask: 0x1 00:07:09.915 00:07:09.915 Accel Perf Configuration: 00:07:09.915 Workload Type: dualcast 00:07:09.915 Transfer size: 4096 bytes 00:07:09.915 Vector count 1 00:07:09.915 Module: software 00:07:09.915 Queue depth: 32 00:07:09.915 Allocate depth: 32 00:07:09.915 # threads/core: 1 00:07:09.915 Run time: 1 seconds 00:07:09.915 Verify: Yes 00:07:09.915 00:07:09.915 Running for 1 seconds... 00:07:09.915 00:07:09.915 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.915 ------------------------------------------------------------------------------------ 00:07:09.915 0,0 413536/s 1615 MiB/s 0 0 00:07:09.915 ==================================================================================== 00:07:09.915 Total 413536/s 1615 MiB/s 0 0' 00:07:09.915 07:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:09.915 07:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:09.915 07:58:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:09.915 07:58:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:09.915 07:58:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.915 07:58:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.915 07:58:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.915 07:58:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.915 07:58:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.915 07:58:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.915 07:58:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.915 07:58:20 -- accel/accel.sh@42 -- # jq -r . 
00:07:09.915 [2024-12-07 07:58:20.997181] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.915 [2024-12-07 07:58:20.997401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70829 ] 00:07:09.915 [2024-12-07 07:58:21.123439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.915 [2024-12-07 07:58:21.176556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val= 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val= 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val=0x1 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val= 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val= 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val=dualcast 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val= 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val=software 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val=32 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val=32 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val=1 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 
07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val=Yes 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.174 07:58:21 -- accel/accel.sh@21 -- # val= 00:07:10.174 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.174 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.175 07:58:21 -- accel/accel.sh@21 -- # val= 00:07:10.175 07:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.175 07:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.175 07:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:11.111 07:58:22 -- accel/accel.sh@21 -- # val= 00:07:11.112 07:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.112 07:58:22 -- accel/accel.sh@21 -- # val= 00:07:11.112 07:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.112 07:58:22 -- accel/accel.sh@21 -- # val= 00:07:11.112 07:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.112 07:58:22 -- accel/accel.sh@21 -- # val= 00:07:11.112 07:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.112 07:58:22 -- accel/accel.sh@21 -- # val= 00:07:11.112 07:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.112 07:58:22 -- accel/accel.sh@21 -- # val= 00:07:11.112 07:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.112 07:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.112 07:58:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.112 07:58:22 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:11.112 07:58:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.112 00:07:11.112 real 0m2.819s 00:07:11.112 user 0m2.396s 00:07:11.112 sys 0m0.218s 00:07:11.112 07:58:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.112 07:58:22 -- common/autotest_common.sh@10 -- # set +x 00:07:11.112 ************************************ 00:07:11.112 END TEST accel_dualcast 00:07:11.112 ************************************ 00:07:11.371 07:58:22 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:11.371 07:58:22 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:11.371 07:58:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.371 07:58:22 -- common/autotest_common.sh@10 -- # set +x 00:07:11.371 ************************************ 00:07:11.371 START TEST accel_compare 00:07:11.371 ************************************ 00:07:11.371 07:58:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:11.371 
07:58:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.371 07:58:22 -- accel/accel.sh@17 -- # local accel_module 00:07:11.371 07:58:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:11.371 07:58:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:11.371 07:58:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.371 07:58:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.371 07:58:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.371 07:58:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.371 07:58:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.371 07:58:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.371 07:58:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.371 07:58:22 -- accel/accel.sh@42 -- # jq -r . 00:07:11.371 [2024-12-07 07:58:22.446134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.371 [2024-12-07 07:58:22.446266] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70858 ] 00:07:11.371 [2024-12-07 07:58:22.578197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.630 [2024-12-07 07:58:22.645718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.565 07:58:23 -- accel/accel.sh@18 -- # out=' 00:07:12.565 SPDK Configuration: 00:07:12.565 Core mask: 0x1 00:07:12.565 00:07:12.565 Accel Perf Configuration: 00:07:12.565 Workload Type: compare 00:07:12.565 Transfer size: 4096 bytes 00:07:12.565 Vector count 1 00:07:12.565 Module: software 00:07:12.565 Queue depth: 32 00:07:12.565 Allocate depth: 32 00:07:12.565 # threads/core: 1 00:07:12.565 Run time: 1 seconds 00:07:12.565 Verify: Yes 00:07:12.565 00:07:12.565 Running for 1 seconds... 00:07:12.565 00:07:12.565 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.565 ------------------------------------------------------------------------------------ 00:07:12.565 0,0 539296/s 2106 MiB/s 0 0 00:07:12.565 ==================================================================================== 00:07:12.565 Total 539296/s 2106 MiB/s 0 0' 00:07:12.565 07:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.565 07:58:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:12.565 07:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.565 07:58:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:12.565 07:58:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.823 07:58:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.823 07:58:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.823 07:58:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.823 07:58:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.823 07:58:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.823 07:58:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.823 07:58:23 -- accel/accel.sh@42 -- # jq -r . 00:07:12.823 [2024-12-07 07:58:23.862546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:12.823 [2024-12-07 07:58:23.862642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:07:12.823 [2024-12-07 07:58:24.000837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.823 [2024-12-07 07:58:24.066642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.082 07:58:24 -- accel/accel.sh@21 -- # val= 00:07:13.082 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.082 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val= 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val=0x1 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val= 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val= 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val=compare 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val= 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val=software 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val=32 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val=32 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val=1 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val=Yes 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val= 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.083 07:58:24 -- accel/accel.sh@21 -- # val= 00:07:13.083 07:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:13.083 07:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:14.027 07:58:25 -- accel/accel.sh@21 -- # val= 00:07:14.027 07:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.027 07:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.027 07:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.027 07:58:25 -- accel/accel.sh@21 -- # val= 00:07:14.027 07:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.027 07:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.027 07:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.027 07:58:25 -- accel/accel.sh@21 -- # val= 00:07:14.027 07:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.027 07:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.028 07:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.028 07:58:25 -- accel/accel.sh@21 -- # val= 00:07:14.028 07:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.028 07:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.028 07:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.028 07:58:25 -- accel/accel.sh@21 -- # val= 00:07:14.028 07:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.028 07:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.028 07:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.028 07:58:25 -- accel/accel.sh@21 -- # val= 00:07:14.028 07:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.028 07:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.028 07:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.028 07:58:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.028 07:58:25 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:14.028 07:58:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.028 ************************************ 00:07:14.028 END TEST accel_compare 00:07:14.028 ************************************ 00:07:14.028 00:07:14.028 real 0m2.842s 00:07:14.028 user 0m2.414s 00:07:14.028 sys 0m0.224s 00:07:14.028 07:58:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.028 07:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:14.289 07:58:25 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:14.289 07:58:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:14.289 07:58:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.289 07:58:25 -- common/autotest_common.sh@10 -- # set +x 00:07:14.289 ************************************ 00:07:14.289 START TEST accel_xor 00:07:14.289 ************************************ 00:07:14.289 07:58:25 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:14.289 07:58:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.289 07:58:25 -- accel/accel.sh@17 -- # local accel_module 00:07:14.289 
07:58:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:14.289 07:58:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:14.289 07:58:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.289 07:58:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.289 07:58:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.289 07:58:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.289 07:58:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.289 07:58:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.289 07:58:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.289 07:58:25 -- accel/accel.sh@42 -- # jq -r . 00:07:14.289 [2024-12-07 07:58:25.338881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.289 [2024-12-07 07:58:25.338974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70912 ] 00:07:14.289 [2024-12-07 07:58:25.467295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.289 [2024-12-07 07:58:25.532094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.665 07:58:26 -- accel/accel.sh@18 -- # out=' 00:07:15.665 SPDK Configuration: 00:07:15.665 Core mask: 0x1 00:07:15.665 00:07:15.665 Accel Perf Configuration: 00:07:15.665 Workload Type: xor 00:07:15.665 Source buffers: 2 00:07:15.665 Transfer size: 4096 bytes 00:07:15.665 Vector count 1 00:07:15.665 Module: software 00:07:15.665 Queue depth: 32 00:07:15.665 Allocate depth: 32 00:07:15.665 # threads/core: 1 00:07:15.665 Run time: 1 seconds 00:07:15.665 Verify: Yes 00:07:15.665 00:07:15.665 Running for 1 seconds... 00:07:15.665 00:07:15.665 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.665 ------------------------------------------------------------------------------------ 00:07:15.665 0,0 284352/s 1110 MiB/s 0 0 00:07:15.665 ==================================================================================== 00:07:15.665 Total 284352/s 1110 MiB/s 0 0' 00:07:15.665 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.665 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.665 07:58:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:15.665 07:58:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:15.665 07:58:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.665 07:58:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.665 07:58:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.665 07:58:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.665 07:58:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.665 07:58:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.665 07:58:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.665 07:58:26 -- accel/accel.sh@42 -- # jq -r . 00:07:15.665 [2024-12-07 07:58:26.741805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:15.665 [2024-12-07 07:58:26.741894] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70926 ] 00:07:15.665 [2024-12-07 07:58:26.881415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.924 [2024-12-07 07:58:26.939921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val= 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val= 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val=0x1 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val= 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val= 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val=xor 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val=2 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.924 07:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val= 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val=software 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val=32 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val=32 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val=1 00:07:15.924 07:58:27 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val=Yes 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val= 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.924 07:58:27 -- accel/accel.sh@21 -- # val= 00:07:15.924 07:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:15.924 07:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.863 07:58:28 -- accel/accel.sh@21 -- # val= 00:07:16.863 07:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.863 07:58:28 -- accel/accel.sh@21 -- # val= 00:07:16.863 07:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.863 07:58:28 -- accel/accel.sh@21 -- # val= 00:07:16.863 07:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.863 07:58:28 -- accel/accel.sh@21 -- # val= 00:07:16.863 07:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:16.863 07:58:28 -- accel/accel.sh@21 -- # val= 00:07:16.863 07:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.863 07:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.122 ************************************ 00:07:17.122 END TEST accel_xor 00:07:17.122 ************************************ 00:07:17.122 07:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.122 07:58:28 -- accel/accel.sh@21 -- # val= 00:07:17.123 07:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.123 07:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:17.123 07:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:17.123 07:58:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.123 07:58:28 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:17.123 07:58:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.123 00:07:17.123 real 0m2.821s 00:07:17.123 user 0m2.397s 00:07:17.123 sys 0m0.220s 00:07:17.123 07:58:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.123 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.123 07:58:28 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:17.123 07:58:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:17.123 07:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.123 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:07:17.123 ************************************ 00:07:17.123 START TEST accel_xor 00:07:17.123 ************************************ 00:07:17.123 
07:58:28 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:17.123 07:58:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.123 07:58:28 -- accel/accel.sh@17 -- # local accel_module 00:07:17.123 07:58:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:17.123 07:58:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:17.123 07:58:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.123 07:58:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.123 07:58:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.123 07:58:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.123 07:58:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.123 07:58:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.123 07:58:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.123 07:58:28 -- accel/accel.sh@42 -- # jq -r . 00:07:17.123 [2024-12-07 07:58:28.212802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.123 [2024-12-07 07:58:28.212886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70966 ] 00:07:17.123 [2024-12-07 07:58:28.344105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.381 [2024-12-07 07:58:28.406435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.329 07:58:29 -- accel/accel.sh@18 -- # out=' 00:07:18.329 SPDK Configuration: 00:07:18.329 Core mask: 0x1 00:07:18.329 00:07:18.329 Accel Perf Configuration: 00:07:18.329 Workload Type: xor 00:07:18.329 Source buffers: 3 00:07:18.329 Transfer size: 4096 bytes 00:07:18.329 Vector count 1 00:07:18.329 Module: software 00:07:18.329 Queue depth: 32 00:07:18.329 Allocate depth: 32 00:07:18.329 # threads/core: 1 00:07:18.329 Run time: 1 seconds 00:07:18.329 Verify: Yes 00:07:18.329 00:07:18.329 Running for 1 seconds... 00:07:18.329 00:07:18.329 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.329 ------------------------------------------------------------------------------------ 00:07:18.329 0,0 268160/s 1047 MiB/s 0 0 00:07:18.329 ==================================================================================== 00:07:18.329 Total 268160/s 1047 MiB/s 0 0' 00:07:18.329 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.329 07:58:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:18.329 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.639 07:58:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.639 07:58:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:18.639 07:58:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.639 07:58:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.639 07:58:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.639 07:58:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.639 07:58:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.639 07:58:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.639 07:58:29 -- accel/accel.sh@42 -- # jq -r . 00:07:18.639 [2024-12-07 07:58:29.621883] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:18.639 [2024-12-07 07:58:29.622416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:07:18.639 [2024-12-07 07:58:29.751295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.639 [2024-12-07 07:58:29.818640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.639 07:58:29 -- accel/accel.sh@21 -- # val= 00:07:18.639 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.639 07:58:29 -- accel/accel.sh@21 -- # val= 00:07:18.639 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.639 07:58:29 -- accel/accel.sh@21 -- # val=0x1 00:07:18.639 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.639 07:58:29 -- accel/accel.sh@21 -- # val= 00:07:18.639 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.639 07:58:29 -- accel/accel.sh@21 -- # val= 00:07:18.639 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.639 07:58:29 -- accel/accel.sh@21 -- # val=xor 00:07:18.639 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.639 07:58:29 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.639 07:58:29 -- accel/accel.sh@21 -- # val=3 00:07:18.639 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.639 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.906 07:58:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.906 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.906 07:58:29 -- accel/accel.sh@21 -- # val= 00:07:18.906 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.906 07:58:29 -- accel/accel.sh@21 -- # val=software 00:07:18.906 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.906 07:58:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.906 07:58:29 -- accel/accel.sh@21 -- # val=32 00:07:18.906 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.906 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.906 07:58:29 -- accel/accel.sh@21 -- # val=32 00:07:18.906 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.907 07:58:29 -- accel/accel.sh@21 -- # val=1 00:07:18.907 07:58:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.907 07:58:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.907 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.907 07:58:29 -- accel/accel.sh@21 -- # val=Yes 00:07:18.907 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.907 07:58:29 -- accel/accel.sh@21 -- # val= 00:07:18.907 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.907 07:58:29 -- accel/accel.sh@21 -- # val= 00:07:18.907 07:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.907 07:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:19.839 07:58:31 -- accel/accel.sh@21 -- # val= 00:07:19.839 07:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.839 07:58:31 -- accel/accel.sh@21 -- # val= 00:07:19.839 07:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.839 07:58:31 -- accel/accel.sh@21 -- # val= 00:07:19.839 07:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.839 07:58:31 -- accel/accel.sh@21 -- # val= 00:07:19.839 07:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.839 07:58:31 -- accel/accel.sh@21 -- # val= 00:07:19.839 07:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.839 07:58:31 -- accel/accel.sh@21 -- # val= 00:07:19.839 07:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # IFS=: 00:07:19.839 07:58:31 -- accel/accel.sh@20 -- # read -r var val 00:07:19.839 07:58:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.839 07:58:31 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:19.839 07:58:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.839 00:07:19.839 real 0m2.826s 00:07:19.839 user 0m2.401s 00:07:19.839 sys 0m0.217s 00:07:19.839 07:58:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.839 07:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:19.839 ************************************ 00:07:19.839 END TEST accel_xor 00:07:19.839 ************************************ 00:07:19.839 07:58:31 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:19.839 07:58:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:19.839 07:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.839 07:58:31 -- common/autotest_common.sh@10 -- # set +x 00:07:19.839 ************************************ 00:07:19.839 START TEST accel_dif_verify 00:07:19.839 ************************************ 
00:07:19.839 07:58:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:19.839 07:58:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.839 07:58:31 -- accel/accel.sh@17 -- # local accel_module 00:07:19.839 07:58:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:19.839 07:58:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:19.839 07:58:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.839 07:58:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.839 07:58:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.839 07:58:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.839 07:58:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.839 07:58:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.839 07:58:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.839 07:58:31 -- accel/accel.sh@42 -- # jq -r . 00:07:19.839 [2024-12-07 07:58:31.101818] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:19.839 [2024-12-07 07:58:31.101911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71014 ] 00:07:20.097 [2024-12-07 07:58:31.240297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.097 [2024-12-07 07:58:31.301328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.471 07:58:32 -- accel/accel.sh@18 -- # out=' 00:07:21.471 SPDK Configuration: 00:07:21.471 Core mask: 0x1 00:07:21.471 00:07:21.471 Accel Perf Configuration: 00:07:21.471 Workload Type: dif_verify 00:07:21.471 Vector size: 4096 bytes 00:07:21.471 Transfer size: 4096 bytes 00:07:21.471 Block size: 512 bytes 00:07:21.471 Metadata size: 8 bytes 00:07:21.471 Vector count 1 00:07:21.471 Module: software 00:07:21.471 Queue depth: 32 00:07:21.471 Allocate depth: 32 00:07:21.471 # threads/core: 1 00:07:21.471 Run time: 1 seconds 00:07:21.471 Verify: No 00:07:21.471 00:07:21.471 Running for 1 seconds... 00:07:21.471 00:07:21.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.471 ------------------------------------------------------------------------------------ 00:07:21.471 0,0 120704/s 478 MiB/s 0 0 00:07:21.471 ==================================================================================== 00:07:21.471 Total 120704/s 471 MiB/s 0 0' 00:07:21.471 07:58:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.471 07:58:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:21.471 07:58:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.471 07:58:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.471 07:58:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.471 07:58:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.471 07:58:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.471 07:58:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.471 07:58:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.471 07:58:32 -- accel/accel.sh@42 -- # jq -r . 00:07:21.471 [2024-12-07 07:58:32.502527] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:21.471 [2024-12-07 07:58:32.502646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71034 ] 00:07:21.471 [2024-12-07 07:58:32.626525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.471 [2024-12-07 07:58:32.684558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.471 07:58:32 -- accel/accel.sh@21 -- # val= 00:07:21.471 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.471 07:58:32 -- accel/accel.sh@21 -- # val= 00:07:21.471 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.471 07:58:32 -- accel/accel.sh@21 -- # val=0x1 00:07:21.471 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.471 07:58:32 -- accel/accel.sh@21 -- # val= 00:07:21.471 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.471 07:58:32 -- accel/accel.sh@21 -- # val= 00:07:21.471 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.471 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.471 07:58:32 -- accel/accel.sh@21 -- # val=dif_verify 00:07:21.731 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.731 07:58:32 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.731 07:58:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.731 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.731 07:58:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.731 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.731 07:58:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:21.731 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.731 07:58:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:21.731 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.731 07:58:32 -- accel/accel.sh@21 -- # val= 00:07:21.731 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.731 07:58:32 -- accel/accel.sh@21 -- # val=software 00:07:21.731 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.731 07:58:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.731 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.731 07:58:32 -- accel/accel.sh@21 
-- # val=32 00:07:21.732 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.732 07:58:32 -- accel/accel.sh@21 -- # val=32 00:07:21.732 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.732 07:58:32 -- accel/accel.sh@21 -- # val=1 00:07:21.732 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.732 07:58:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.732 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.732 07:58:32 -- accel/accel.sh@21 -- # val=No 00:07:21.732 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.732 07:58:32 -- accel/accel.sh@21 -- # val= 00:07:21.732 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.732 07:58:32 -- accel/accel.sh@21 -- # val= 00:07:21.732 07:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # IFS=: 00:07:21.732 07:58:32 -- accel/accel.sh@20 -- # read -r var val 00:07:22.666 07:58:33 -- accel/accel.sh@21 -- # val= 00:07:22.666 07:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.666 07:58:33 -- accel/accel.sh@21 -- # val= 00:07:22.666 07:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.666 07:58:33 -- accel/accel.sh@21 -- # val= 00:07:22.666 07:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.666 07:58:33 -- accel/accel.sh@21 -- # val= 00:07:22.666 07:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.666 07:58:33 -- accel/accel.sh@21 -- # val= 00:07:22.666 07:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.666 07:58:33 -- accel/accel.sh@21 -- # val= 00:07:22.666 07:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # IFS=: 00:07:22.666 07:58:33 -- accel/accel.sh@20 -- # read -r var val 00:07:22.666 07:58:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.666 07:58:33 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:22.667 07:58:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.667 00:07:22.667 real 0m2.797s 00:07:22.667 user 0m2.385s 00:07:22.667 sys 0m0.210s 00:07:22.667 07:58:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:22.667 ************************************ 00:07:22.667 END TEST accel_dif_verify 00:07:22.667 ************************************ 00:07:22.667 
07:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:22.667 07:58:33 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:22.667 07:58:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:22.667 07:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.667 07:58:33 -- common/autotest_common.sh@10 -- # set +x 00:07:22.667 ************************************ 00:07:22.667 START TEST accel_dif_generate 00:07:22.667 ************************************ 00:07:22.667 07:58:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:22.667 07:58:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.667 07:58:33 -- accel/accel.sh@17 -- # local accel_module 00:07:22.667 07:58:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:22.667 07:58:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:22.667 07:58:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.667 07:58:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.667 07:58:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.667 07:58:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.667 07:58:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.667 07:58:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.667 07:58:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.667 07:58:33 -- accel/accel.sh@42 -- # jq -r . 00:07:22.925 [2024-12-07 07:58:33.949448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.925 [2024-12-07 07:58:33.949684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71063 ] 00:07:22.925 [2024-12-07 07:58:34.086225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.925 [2024-12-07 07:58:34.150862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.303 07:58:35 -- accel/accel.sh@18 -- # out=' 00:07:24.303 SPDK Configuration: 00:07:24.303 Core mask: 0x1 00:07:24.303 00:07:24.303 Accel Perf Configuration: 00:07:24.303 Workload Type: dif_generate 00:07:24.303 Vector size: 4096 bytes 00:07:24.303 Transfer size: 4096 bytes 00:07:24.303 Block size: 512 bytes 00:07:24.303 Metadata size: 8 bytes 00:07:24.303 Vector count 1 00:07:24.303 Module: software 00:07:24.303 Queue depth: 32 00:07:24.303 Allocate depth: 32 00:07:24.303 # threads/core: 1 00:07:24.303 Run time: 1 seconds 00:07:24.303 Verify: No 00:07:24.303 00:07:24.303 Running for 1 seconds... 
00:07:24.303 00:07:24.303 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.303 ------------------------------------------------------------------------------------ 00:07:24.303 0,0 149824/s 585 MiB/s 0 0 00:07:24.303 ==================================================================================== 00:07:24.303 Total 149824/s 585 MiB/s 0 0' 00:07:24.303 07:58:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:24.303 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.303 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.303 07:58:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:24.303 07:58:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.303 07:58:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.303 07:58:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.303 07:58:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.303 07:58:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.303 07:58:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.303 07:58:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.303 07:58:35 -- accel/accel.sh@42 -- # jq -r . 00:07:24.303 [2024-12-07 07:58:35.361430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.303 [2024-12-07 07:58:35.361539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71088 ] 00:07:24.303 [2024-12-07 07:58:35.490340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.303 [2024-12-07 07:58:35.541602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val= 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val= 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val=0x1 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val= 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val= 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val=dif_generate 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 
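The Bandwidth column in these summaries is just the Transfers figure multiplied by the configured transfer size and truncated to whole MiB/s, so with only core 0 active the per-core row and the Total row have to agree. The dif_generate result above can be checked with shell arithmetic (numbers copied from the table, 1 MiB = 1048576 bytes):
  # Sanity-check the dif_generate bandwidth figure reported above
  transfers_per_sec=149824
  transfer_size=4096   # bytes, from "Transfer size: 4096 bytes" in the banner
  echo "$(( transfers_per_sec * transfer_size / 1048576 )) MiB/s"   # prints "585 MiB/s"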
00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val= 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val=software 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val=32 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val=32 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.561 07:58:35 -- accel/accel.sh@21 -- # val=1 00:07:24.561 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.561 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.562 07:58:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.562 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.562 07:58:35 -- accel/accel.sh@21 -- # val=No 00:07:24.562 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.562 07:58:35 -- accel/accel.sh@21 -- # val= 00:07:24.562 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.562 07:58:35 -- accel/accel.sh@21 -- # val= 00:07:24.562 07:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.562 07:58:35 -- accel/accel.sh@20 -- # read -r var val 00:07:25.497 07:58:36 -- accel/accel.sh@21 -- # val= 00:07:25.497 07:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.497 07:58:36 -- accel/accel.sh@21 -- # val= 00:07:25.497 07:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.497 07:58:36 -- accel/accel.sh@21 -- # val= 00:07:25.497 07:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.497 07:58:36 -- 
accel/accel.sh@20 -- # IFS=: 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.497 07:58:36 -- accel/accel.sh@21 -- # val= 00:07:25.497 07:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.497 07:58:36 -- accel/accel.sh@21 -- # val= 00:07:25.497 07:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.497 07:58:36 -- accel/accel.sh@21 -- # val= 00:07:25.497 07:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # IFS=: 00:07:25.497 07:58:36 -- accel/accel.sh@20 -- # read -r var val 00:07:25.497 07:58:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.497 07:58:36 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:25.497 07:58:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.497 00:07:25.497 real 0m2.811s 00:07:25.497 user 0m2.390s 00:07:25.497 sys 0m0.222s 00:07:25.497 07:58:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:25.497 ************************************ 00:07:25.497 END TEST accel_dif_generate 00:07:25.497 ************************************ 00:07:25.497 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:07:25.756 07:58:36 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:25.756 07:58:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:25.756 07:58:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.756 07:58:36 -- common/autotest_common.sh@10 -- # set +x 00:07:25.756 ************************************ 00:07:25.756 START TEST accel_dif_generate_copy 00:07:25.756 ************************************ 00:07:25.756 07:58:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:25.756 07:58:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.756 07:58:36 -- accel/accel.sh@17 -- # local accel_module 00:07:25.756 07:58:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:25.756 07:58:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:25.756 07:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.756 07:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.756 07:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.756 07:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.756 07:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.756 07:58:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.756 07:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.756 07:58:36 -- accel/accel.sh@42 -- # jq -r . 00:07:25.756 [2024-12-07 07:58:36.808422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:25.756 [2024-12-07 07:58:36.808510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71117 ] 00:07:25.756 [2024-12-07 07:58:36.937844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.756 [2024-12-07 07:58:36.996590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.134 07:58:38 -- accel/accel.sh@18 -- # out=' 00:07:27.134 SPDK Configuration: 00:07:27.134 Core mask: 0x1 00:07:27.134 00:07:27.134 Accel Perf Configuration: 00:07:27.134 Workload Type: dif_generate_copy 00:07:27.134 Vector size: 4096 bytes 00:07:27.134 Transfer size: 4096 bytes 00:07:27.134 Vector count 1 00:07:27.134 Module: software 00:07:27.134 Queue depth: 32 00:07:27.134 Allocate depth: 32 00:07:27.134 # threads/core: 1 00:07:27.134 Run time: 1 seconds 00:07:27.134 Verify: No 00:07:27.134 00:07:27.134 Running for 1 seconds... 00:07:27.134 00:07:27.134 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.134 ------------------------------------------------------------------------------------ 00:07:27.134 0,0 112992/s 441 MiB/s 0 0 00:07:27.134 ==================================================================================== 00:07:27.134 Total 112992/s 441 MiB/s 0 0' 00:07:27.134 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.134 07:58:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:27.134 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.134 07:58:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:27.134 07:58:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.134 07:58:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.134 07:58:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.134 07:58:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.134 07:58:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.134 07:58:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.134 07:58:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.134 07:58:38 -- accel/accel.sh@42 -- # jq -r . 00:07:27.134 [2024-12-07 07:58:38.210865] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
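Every accel_perf invocation in this file takes -c /dev/fd/62, which is how build_accel_config in accel.sh hands the tool a JSON accel configuration without a temporary file: the entries collected in the accel_json_cfg array (empty in this job, hence the [[ 0 -gt 0 ]] checks all failing) are joined with IFS=, run through jq -r . and attached to descriptor 62. A minimal stand-alone sketch of the same plumbing, with a placeholder JSON body rather than the configuration this job builds:
  # Sketch of the fd-62 plumbing behind "-c /dev/fd/62"
  cfg='{"subsystems": []}'                       # placeholder JSON, not the real config
  cat /dev/fd/62 62< <(echo "$cfg" | jq -r .)    # stand-in consumer for accel_perf
  # accel.sh does the same thing with the real binary, roughly:
  #   accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 62< <(build_accel_config)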
00:07:27.134 [2024-12-07 07:58:38.210958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71131 ] 00:07:27.134 [2024-12-07 07:58:38.349297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.394 [2024-12-07 07:58:38.410846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val= 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val= 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val=0x1 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val= 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val= 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val= 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val=software 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val=32 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val=32 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 
-- # val=1 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val=No 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val= 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.394 07:58:38 -- accel/accel.sh@21 -- # val= 00:07:27.394 07:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.394 07:58:38 -- accel/accel.sh@20 -- # read -r var val 00:07:28.331 07:58:39 -- accel/accel.sh@21 -- # val= 00:07:28.331 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:07:28.331 07:58:39 -- accel/accel.sh@21 -- # val= 00:07:28.331 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:07:28.331 07:58:39 -- accel/accel.sh@21 -- # val= 00:07:28.331 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:07:28.331 07:58:39 -- accel/accel.sh@21 -- # val= 00:07:28.331 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:07:28.331 07:58:39 -- accel/accel.sh@21 -- # val= 00:07:28.331 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:07:28.331 07:58:39 -- accel/accel.sh@21 -- # val= 00:07:28.331 07:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # IFS=: 00:07:28.331 07:58:39 -- accel/accel.sh@20 -- # read -r var val 00:07:28.331 07:58:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.331 07:58:39 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:28.331 07:58:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.331 00:07:28.331 real 0m2.814s 00:07:28.331 user 0m2.395s 00:07:28.331 sys 0m0.215s 00:07:28.331 07:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.331 07:58:39 -- common/autotest_common.sh@10 -- # set +x 00:07:28.331 ************************************ 00:07:28.331 END TEST accel_dif_generate_copy 00:07:28.331 ************************************ 00:07:28.590 07:58:39 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:28.590 07:58:39 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.590 07:58:39 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:28.590 07:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.590 07:58:39 -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.590 ************************************ 00:07:28.590 START TEST accel_comp 00:07:28.590 ************************************ 00:07:28.590 07:58:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.590 07:58:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.590 07:58:39 -- accel/accel.sh@17 -- # local accel_module 00:07:28.590 07:58:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.590 07:58:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:28.590 07:58:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.590 07:58:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.590 07:58:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.590 07:58:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.590 07:58:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.590 07:58:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.590 07:58:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.590 07:58:39 -- accel/accel.sh@42 -- # jq -r . 00:07:28.590 [2024-12-07 07:58:39.678106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.590 [2024-12-07 07:58:39.678386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71171 ] 00:07:28.590 [2024-12-07 07:58:39.814463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.849 [2024-12-07 07:58:39.877016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.226 07:58:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.226 00:07:30.226 SPDK Configuration: 00:07:30.226 Core mask: 0x1 00:07:30.226 00:07:30.226 Accel Perf Configuration: 00:07:30.226 Workload Type: compress 00:07:30.226 Transfer size: 4096 bytes 00:07:30.226 Vector count 1 00:07:30.226 Module: software 00:07:30.226 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.226 Queue depth: 32 00:07:30.226 Allocate depth: 32 00:07:30.226 # threads/core: 1 00:07:30.226 Run time: 1 seconds 00:07:30.226 Verify: No 00:07:30.226 00:07:30.226 Running for 1 seconds... 
00:07:30.226 00:07:30.226 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.226 ------------------------------------------------------------------------------------ 00:07:30.226 0,0 57280/s 223 MiB/s 0 0 00:07:30.226 ==================================================================================== 00:07:30.226 Total 57280/s 223 MiB/s 0 0' 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.226 07:58:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.226 07:58:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.226 07:58:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.226 07:58:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.226 07:58:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.226 07:58:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.226 07:58:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.226 07:58:41 -- accel/accel.sh@42 -- # jq -r . 00:07:30.226 [2024-12-07 07:58:41.092279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.226 [2024-12-07 07:58:41.092371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71185 ] 00:07:30.226 [2024-12-07 07:58:41.228949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.226 [2024-12-07 07:58:41.287265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=0x1 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=compress 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=:
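Unlike the dif workloads, the compress pass above needs real input data, which is why its command lines add -l /home/vagrant/spdk_repo/spdk/test/accel/bib and its banner starts with 'Preparing input file...'. A rough stand-alone equivalent, under the same path assumption as the earlier sketch:
  # Sketch: software compress benchmark over the bib sample file used by this job
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"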
00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=software 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=32 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=32 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=1 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val=No 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.226 07:58:41 -- accel/accel.sh@21 -- # val= 00:07:30.226 07:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.226 07:58:41 -- accel/accel.sh@20 -- # read -r var val 00:07:31.600 07:58:42 -- accel/accel.sh@21 -- # val= 00:07:31.600 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.600 07:58:42 -- accel/accel.sh@21 -- # val= 00:07:31.600 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.600 07:58:42 -- accel/accel.sh@21 -- # val= 00:07:31.600 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.600 07:58:42 -- accel/accel.sh@21 -- # val= 
00:07:31.600 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.600 07:58:42 -- accel/accel.sh@21 -- # val= 00:07:31.600 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.600 07:58:42 -- accel/accel.sh@21 -- # val= 00:07:31.600 07:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.600 07:58:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.600 07:58:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.600 07:58:42 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:31.600 07:58:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.600 00:07:31.600 real 0m2.832s 00:07:31.600 user 0m2.414s 00:07:31.600 sys 0m0.213s 00:07:31.601 07:58:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.601 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.601 ************************************ 00:07:31.601 END TEST accel_comp 00:07:31.601 ************************************ 00:07:31.601 07:58:42 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.601 07:58:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:31.601 07:58:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.601 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:07:31.601 ************************************ 00:07:31.601 START TEST accel_decomp 00:07:31.601 ************************************ 00:07:31.601 07:58:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.601 07:58:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.601 07:58:42 -- accel/accel.sh@17 -- # local accel_module 00:07:31.601 07:58:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.601 07:58:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.601 07:58:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.601 07:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.601 07:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.601 07:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.601 07:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.601 07:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.601 07:58:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.601 07:58:42 -- accel/accel.sh@42 -- # jq -r . 00:07:31.601 [2024-12-07 07:58:42.560325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:31.601 [2024-12-07 07:58:42.560809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71225 ] 00:07:31.601 [2024-12-07 07:58:42.698270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.601 [2024-12-07 07:58:42.765366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.977 07:58:43 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:32.977 00:07:32.977 SPDK Configuration: 00:07:32.977 Core mask: 0x1 00:07:32.977 00:07:32.977 Accel Perf Configuration: 00:07:32.977 Workload Type: decompress 00:07:32.977 Transfer size: 4096 bytes 00:07:32.977 Vector count 1 00:07:32.977 Module: software 00:07:32.977 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.977 Queue depth: 32 00:07:32.977 Allocate depth: 32 00:07:32.977 # threads/core: 1 00:07:32.977 Run time: 1 seconds 00:07:32.977 Verify: Yes 00:07:32.977 00:07:32.977 Running for 1 seconds... 00:07:32.977 00:07:32.977 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.977 ------------------------------------------------------------------------------------ 00:07:32.977 0,0 81184/s 317 MiB/s 0 0 00:07:32.977 ==================================================================================== 00:07:32.977 Total 81184/s 317 MiB/s 0 0' 00:07:32.977 07:58:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:32.977 07:58:43 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:43 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.977 07:58:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:32.977 07:58:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.977 07:58:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.977 07:58:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.977 07:58:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.977 07:58:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.977 07:58:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.977 07:58:43 -- accel/accel.sh@42 -- # jq -r . 00:07:32.977 [2024-12-07 07:58:43.979135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
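The blocks of val=, case "$var" in, IFS=: and read -r var val lines that make up most of this output are xtrace from accel.sh reading the accel_perf banner back in: each 'Key: value' line is split on the colon, and a case statement records the workload (the accel_opc=compress and accel_opc=dif_generate assignments seen earlier) and the module (accel_module=software), so the [[ -n ... ]] checks at the end of every test can assert on them. An illustrative reconstruction of that loop, not copied verbatim from accel.sh:
  # Illustrative reconstruction of the banner-parsing loop traced throughout this log
  accel_opc='' accel_module=''
  while IFS=: read -r var val; do
      case "$var" in
          "Workload Type") accel_opc=${val# } ;;   # e.g. decompress
          "Module") accel_module=${val# } ;;       # e.g. software
      esac
  done < <(printf '%s\n' 'Workload Type: decompress' 'Module: software')
  [[ -n $accel_opc && -n $accel_module ]]          # the checks seen at each test's end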
00:07:32.977 [2024-12-07 07:58:43.979270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71239 ] 00:07:32.977 [2024-12-07 07:58:44.115367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.977 [2024-12-07 07:58:44.172693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val=0x1 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val=decompress 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val=software 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val=32 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- 
accel/accel.sh@21 -- # val=32 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val=1 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val=Yes 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.977 07:58:44 -- accel/accel.sh@21 -- # val= 00:07:32.977 07:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # IFS=: 00:07:32.977 07:58:44 -- accel/accel.sh@20 -- # read -r var val 00:07:34.354 07:58:45 -- accel/accel.sh@21 -- # val= 00:07:34.354 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.354 07:58:45 -- accel/accel.sh@21 -- # val= 00:07:34.354 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.354 07:58:45 -- accel/accel.sh@21 -- # val= 00:07:34.354 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.354 07:58:45 -- accel/accel.sh@21 -- # val= 00:07:34.354 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.354 07:58:45 -- accel/accel.sh@21 -- # val= 00:07:34.354 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.354 07:58:45 -- accel/accel.sh@21 -- # val= 00:07:34.354 07:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.354 07:58:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.354 07:58:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.354 ************************************ 00:07:34.354 END TEST accel_decomp 00:07:34.354 ************************************ 00:07:34.354 07:58:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.354 07:58:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.354 00:07:34.354 real 0m2.836s 00:07:34.354 user 0m2.401s 00:07:34.354 sys 0m0.225s 00:07:34.354 07:58:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.354 07:58:45 -- common/autotest_common.sh@10 -- # set +x 00:07:34.354 07:58:45 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
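Each section of this file is wrapped by run_test, which is what produces the START TEST / END TEST banners and the real/user/sys timing lines around the accel_test call (each call, in turn, launches accel_perf twice, as the two spdk_pid values per test show). A stripped-down sketch of such a wrapper, with names chosen freely rather than copied from autotest_common.sh:
  # Sketch of a run_test-style wrapper: banner, timed command, closing banner
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # usage: run_test_sketch accel_comp ./build/examples/accel_perf -t 1 -w compress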
00:07:34.354 07:58:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:34.354 07:58:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.354 07:58:45 -- common/autotest_common.sh@10 -- # set +x 00:07:34.354 ************************************ 00:07:34.354 START TEST accel_decmop_full 00:07:34.354 ************************************ 00:07:34.354 07:58:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.354 07:58:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.354 07:58:45 -- accel/accel.sh@17 -- # local accel_module 00:07:34.354 07:58:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.354 07:58:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:34.354 07:58:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.354 07:58:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.354 07:58:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.354 07:58:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.354 07:58:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.354 07:58:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.354 07:58:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.354 07:58:45 -- accel/accel.sh@42 -- # jq -r . 00:07:34.354 [2024-12-07 07:58:45.443416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.354 [2024-12-07 07:58:45.443516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71274 ] 00:07:34.354 [2024-12-07 07:58:45.580034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.612 [2024-12-07 07:58:45.649638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.990 07:58:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.990 00:07:35.990 SPDK Configuration: 00:07:35.990 Core mask: 0x1 00:07:35.990 00:07:35.990 Accel Perf Configuration: 00:07:35.990 Workload Type: decompress 00:07:35.990 Transfer size: 111250 bytes 00:07:35.990 Vector count 1 00:07:35.990 Module: software 00:07:35.990 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.990 Queue depth: 32 00:07:35.991 Allocate depth: 32 00:07:35.991 # threads/core: 1 00:07:35.991 Run time: 1 seconds 00:07:35.991 Verify: Yes 00:07:35.991 00:07:35.991 Running for 1 seconds... 
00:07:35.991 00:07:35.991 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.991 ------------------------------------------------------------------------------------ 00:07:35.991 0,0 5408/s 573 MiB/s 0 0 00:07:35.991 ==================================================================================== 00:07:35.991 Total 5408/s 573 MiB/s 0 0' 00:07:35.991 07:58:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:35.991 07:58:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.991 07:58:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:35.991 07:58:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.991 07:58:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.991 07:58:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.991 07:58:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.991 07:58:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.991 07:58:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.991 07:58:46 -- accel/accel.sh@42 -- # jq -r . 00:07:35.991 [2024-12-07 07:58:46.880367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.991 [2024-12-07 07:58:46.880458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:07:35.991 [2024-12-07 07:58:47.015391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.991 [2024-12-07 07:58:47.073030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=0x1 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=decompress 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.991 07:58:47 -- accel/accel.sh@20
-- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=software 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=32 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=32 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=1 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val=Yes 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.991 07:58:47 -- accel/accel.sh@21 -- # val= 00:07:35.991 07:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # IFS=: 00:07:35.991 07:58:47 -- accel/accel.sh@20 -- # read -r var val 00:07:37.368 07:58:48 -- accel/accel.sh@21 -- # val= 00:07:37.368 07:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.368 07:58:48 -- accel/accel.sh@21 -- # val= 00:07:37.368 07:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.368 07:58:48 -- accel/accel.sh@21 -- # val= 00:07:37.368 07:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.368 07:58:48 -- accel/accel.sh@21 -- # 
val= 00:07:37.368 07:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.368 07:58:48 -- accel/accel.sh@21 -- # val= 00:07:37.368 07:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.368 07:58:48 -- accel/accel.sh@21 -- # val= 00:07:37.368 07:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.368 07:58:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.368 07:58:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.368 07:58:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.368 07:58:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.368 00:07:37.368 real 0m2.858s 00:07:37.368 user 0m2.426s 00:07:37.368 sys 0m0.223s 00:07:37.369 07:58:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.369 07:58:48 -- common/autotest_common.sh@10 -- # set +x 00:07:37.369 ************************************ 00:07:37.369 END TEST accel_decmop_full 00:07:37.369 ************************************ 00:07:37.369 07:58:48 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.369 07:58:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:37.369 07:58:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.369 07:58:48 -- common/autotest_common.sh@10 -- # set +x 00:07:37.369 ************************************ 00:07:37.369 START TEST accel_decomp_mcore 00:07:37.369 ************************************ 00:07:37.369 07:58:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.369 07:58:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.369 07:58:48 -- accel/accel.sh@17 -- # local accel_module 00:07:37.369 07:58:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.369 07:58:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:37.369 07:58:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.369 07:58:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.369 07:58:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.369 07:58:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.369 07:58:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.369 07:58:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.369 07:58:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.369 07:58:48 -- accel/accel.sh@42 -- # jq -r . 00:07:37.369 [2024-12-07 07:58:48.355231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:37.369 [2024-12-07 07:58:48.355324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71322 ] 00:07:37.369 [2024-12-07 07:58:48.483885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.369 [2024-12-07 07:58:48.548407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.369 [2024-12-07 07:58:48.548531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.369 [2024-12-07 07:58:48.548666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.369 [2024-12-07 07:58:48.548668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.749 07:58:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:38.749 00:07:38.749 SPDK Configuration: 00:07:38.749 Core mask: 0xf 00:07:38.749 00:07:38.749 Accel Perf Configuration: 00:07:38.749 Workload Type: decompress 00:07:38.749 Transfer size: 4096 bytes 00:07:38.749 Vector count 1 00:07:38.749 Module: software 00:07:38.749 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.749 Queue depth: 32 00:07:38.749 Allocate depth: 32 00:07:38.749 # threads/core: 1 00:07:38.749 Run time: 1 seconds 00:07:38.749 Verify: Yes 00:07:38.749 00:07:38.749 Running for 1 seconds... 00:07:38.749 00:07:38.749 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.749 ------------------------------------------------------------------------------------ 00:07:38.749 0,0 64064/s 250 MiB/s 0 0 00:07:38.749 3,0 62016/s 242 MiB/s 0 0 00:07:38.749 2,0 62400/s 243 MiB/s 0 0 00:07:38.749 1,0 63232/s 247 MiB/s 0 0 00:07:38.749 ==================================================================================== 00:07:38.749 Total 251712/s 983 MiB/s 0 0' 00:07:38.749 07:58:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.749 07:58:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.749 07:58:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:38.749 07:58:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:38.749 07:58:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.749 07:58:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.749 07:58:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.749 07:58:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.749 07:58:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.749 07:58:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.749 07:58:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.749 07:58:49 -- accel/accel.sh@42 -- # jq -r . 00:07:38.749 [2024-12-07 07:58:49.771553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
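accel_decomp_mcore is the first test in this group to use a multi-core mask (-m 0xf, hence 'Total cores available: 4' and one reactor per core), and its result table gets one row per core plus a Total row whose Transfers and Bandwidth columns are the sums of the per-core rows. A rough stand-alone equivalent of that run, under the same path assumptions as the earlier sketches:
  # Sketch: software decompress benchmark spread across four cores (mask 0xf)
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -m 0xf   # -y enables verification ("Verify: Yes")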
00:07:38.749 [2024-12-07 07:58:49.771699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71345 ] 00:07:38.749 [2024-12-07 07:58:49.908427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:38.749 [2024-12-07 07:58:49.964672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.749 [2024-12-07 07:58:49.964801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.749 [2024-12-07 07:58:49.964924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.749 [2024-12-07 07:58:49.964928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val=0xf 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val=decompress 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.006 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.006 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.006 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val=software 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 
00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val=32 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val=32 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val=1 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val=Yes 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.007 07:58:50 -- accel/accel.sh@21 -- # val= 00:07:39.007 07:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.007 07:58:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.941 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.941 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.941 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.941 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.941 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.941 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.941 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.941 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.941 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.941 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.941 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.941 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.941 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.941 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.941 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.942 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.942 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.942 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.942 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.942 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.942 07:58:51 -- 
accel/accel.sh@20 -- # read -r var val 00:07:39.942 07:58:51 -- accel/accel.sh@21 -- # val= 00:07:39.942 07:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.942 07:58:51 -- accel/accel.sh@20 -- # IFS=: 00:07:39.942 07:58:51 -- accel/accel.sh@20 -- # read -r var val 00:07:39.942 07:58:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.942 07:58:51 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.942 ************************************ 00:07:39.942 END TEST accel_decomp_mcore 00:07:39.942 ************************************ 00:07:39.942 07:58:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.942 00:07:39.942 real 0m2.845s 00:07:39.942 user 0m9.178s 00:07:39.942 sys 0m0.246s 00:07:39.942 07:58:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.942 07:58:51 -- common/autotest_common.sh@10 -- # set +x 00:07:39.942 07:58:51 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:39.942 07:58:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:39.942 07:58:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.942 07:58:51 -- common/autotest_common.sh@10 -- # set +x 00:07:40.200 ************************************ 00:07:40.200 START TEST accel_decomp_full_mcore 00:07:40.200 ************************************ 00:07:40.200 07:58:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.200 07:58:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.200 07:58:51 -- accel/accel.sh@17 -- # local accel_module 00:07:40.200 07:58:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.200 07:58:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:40.200 07:58:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.200 07:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.200 07:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.200 07:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.200 07:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.200 07:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.200 07:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.200 07:58:51 -- accel/accel.sh@42 -- # jq -r . 00:07:40.200 [2024-12-07 07:58:51.247702] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.200 [2024-12-07 07:58:51.247796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71382 ] 00:07:40.200 [2024-12-07 07:58:51.381030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.200 [2024-12-07 07:58:51.436905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.200 [2024-12-07 07:58:51.437040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.200 [2024-12-07 07:58:51.437171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.200 [2024-12-07 07:58:51.437177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.574 07:58:52 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:41.574 00:07:41.574 SPDK Configuration: 00:07:41.574 Core mask: 0xf 00:07:41.574 00:07:41.574 Accel Perf Configuration: 00:07:41.574 Workload Type: decompress 00:07:41.574 Transfer size: 111250 bytes 00:07:41.574 Vector count 1 00:07:41.574 Module: software 00:07:41.574 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.574 Queue depth: 32 00:07:41.574 Allocate depth: 32 00:07:41.574 # threads/core: 1 00:07:41.574 Run time: 1 seconds 00:07:41.574 Verify: Yes 00:07:41.574 00:07:41.574 Running for 1 seconds... 00:07:41.574 00:07:41.574 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.574 ------------------------------------------------------------------------------------ 00:07:41.574 0,0 5024/s 207 MiB/s 0 0 00:07:41.574 3,0 4960/s 204 MiB/s 0 0 00:07:41.574 2,0 5056/s 208 MiB/s 0 0 00:07:41.574 1,0 5056/s 208 MiB/s 0 0 00:07:41.574 ==================================================================================== 00:07:41.574 Total 20096/s 2132 MiB/s 0 0' 00:07:41.574 07:58:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:41.574 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.574 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.574 07:58:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:41.574 07:58:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.574 07:58:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.574 07:58:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.574 07:58:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.574 07:58:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.574 07:58:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.574 07:58:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.574 07:58:52 -- accel/accel.sh@42 -- # jq -r . 00:07:41.574 [2024-12-07 07:58:52.665401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:41.574 [2024-12-07 07:58:52.665495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71405 ] 00:07:41.574 [2024-12-07 07:58:52.794139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.832 [2024-12-07 07:58:52.849047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.832 [2024-12-07 07:58:52.849192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.832 [2024-12-07 07:58:52.849332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.832 [2024-12-07 07:58:52.849564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val=0xf 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val=decompress 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val=software 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 
00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val=32 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val=32 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val=1 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.832 07:58:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.832 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.832 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.833 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.833 07:58:52 -- accel/accel.sh@21 -- # val=Yes 00:07:41.833 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.833 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.833 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.833 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.833 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.833 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.833 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.833 07:58:52 -- accel/accel.sh@21 -- # val= 00:07:41.833 07:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.833 07:58:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.833 07:58:52 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- 
accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@21 -- # val= 00:07:43.203 07:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # IFS=: 00:07:43.203 07:58:54 -- accel/accel.sh@20 -- # read -r var val 00:07:43.203 07:58:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.203 07:58:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.203 07:58:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.203 00:07:43.203 real 0m2.859s 00:07:43.203 user 0m9.278s 00:07:43.203 sys 0m0.228s 00:07:43.203 07:58:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.203 ************************************ 00:07:43.203 END TEST accel_decomp_full_mcore 00:07:43.203 ************************************ 00:07:43.203 07:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:43.203 07:58:54 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.203 07:58:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:43.203 07:58:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.203 07:58:54 -- common/autotest_common.sh@10 -- # set +x 00:07:43.203 ************************************ 00:07:43.203 START TEST accel_decomp_mthread 00:07:43.203 ************************************ 00:07:43.203 07:58:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.203 07:58:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.203 07:58:54 -- accel/accel.sh@17 -- # local accel_module 00:07:43.203 07:58:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.203 07:58:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.203 07:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.203 07:58:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.203 07:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.203 07:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.203 07:58:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.203 07:58:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.203 07:58:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.203 07:58:54 -- accel/accel.sh@42 -- # jq -r . 00:07:43.203 [2024-12-07 07:58:54.153260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.203 [2024-12-07 07:58:54.153355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71442 ] 00:07:43.203 [2024-12-07 07:58:54.286501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.203 [2024-12-07 07:58:54.349904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.575 07:58:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:44.575 00:07:44.575 SPDK Configuration: 00:07:44.575 Core mask: 0x1 00:07:44.575 00:07:44.575 Accel Perf Configuration: 00:07:44.575 Workload Type: decompress 00:07:44.575 Transfer size: 4096 bytes 00:07:44.575 Vector count 1 00:07:44.575 Module: software 00:07:44.575 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.575 Queue depth: 32 00:07:44.575 Allocate depth: 32 00:07:44.575 # threads/core: 2 00:07:44.575 Run time: 1 seconds 00:07:44.575 Verify: Yes 00:07:44.575 00:07:44.575 Running for 1 seconds... 00:07:44.575 00:07:44.575 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.575 ------------------------------------------------------------------------------------ 00:07:44.575 0,1 40288/s 74 MiB/s 0 0 00:07:44.575 0,0 40160/s 74 MiB/s 0 0 00:07:44.575 ==================================================================================== 00:07:44.575 Total 80448/s 314 MiB/s 0 0' 00:07:44.575 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.575 07:58:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:44.575 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.575 07:58:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.575 07:58:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:44.575 07:58:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.575 07:58:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.575 07:58:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.575 07:58:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.575 07:58:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.575 07:58:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.575 07:58:55 -- accel/accel.sh@42 -- # jq -r . 00:07:44.575 [2024-12-07 07:58:55.568904] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:44.575 [2024-12-07 07:58:55.569011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71456 ] 00:07:44.575 [2024-12-07 07:58:55.704545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.575 [2024-12-07 07:58:55.767406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.575 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.575 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.575 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val=0x1 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val=decompress 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val=software 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val=32 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- 
accel/accel.sh@21 -- # val=32 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val=2 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val=Yes 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.576 07:58:55 -- accel/accel.sh@21 -- # val= 00:07:44.576 07:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # IFS=: 00:07:44.576 07:58:55 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@21 -- # val= 00:07:45.953 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@21 -- # val= 00:07:45.953 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@21 -- # val= 00:07:45.953 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@21 -- # val= 00:07:45.953 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@21 -- # val= 00:07:45.953 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@21 -- # val= 00:07:45.953 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@21 -- # val= 00:07:45.953 07:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # IFS=: 00:07:45.953 07:58:56 -- accel/accel.sh@20 -- # read -r var val 00:07:45.953 07:58:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.953 07:58:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:45.953 07:58:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.953 00:07:45.953 real 0m2.839s 00:07:45.953 user 0m2.422s 00:07:45.953 sys 0m0.215s 00:07:45.953 07:58:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.953 ************************************ 00:07:45.953 END TEST accel_decomp_mthread 00:07:45.953 
************************************ 00:07:45.953 07:58:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.953 07:58:57 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.953 07:58:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:45.953 07:58:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.953 07:58:57 -- common/autotest_common.sh@10 -- # set +x 00:07:45.953 ************************************ 00:07:45.953 START TEST accel_deomp_full_mthread 00:07:45.953 ************************************ 00:07:45.953 07:58:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.953 07:58:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.953 07:58:57 -- accel/accel.sh@17 -- # local accel_module 00:07:45.953 07:58:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.953 07:58:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:45.953 07:58:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.953 07:58:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.953 07:58:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.953 07:58:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.953 07:58:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.953 07:58:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.953 07:58:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.953 07:58:57 -- accel/accel.sh@42 -- # jq -r . 00:07:45.953 [2024-12-07 07:58:57.045581] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.953 [2024-12-07 07:58:57.045691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71491 ] 00:07:45.953 [2024-12-07 07:58:57.181793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.213 [2024-12-07 07:58:57.255043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.594 07:58:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:47.594 00:07:47.594 SPDK Configuration: 00:07:47.594 Core mask: 0x1 00:07:47.594 00:07:47.594 Accel Perf Configuration: 00:07:47.594 Workload Type: decompress 00:07:47.594 Transfer size: 111250 bytes 00:07:47.594 Vector count 1 00:07:47.594 Module: software 00:07:47.594 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.594 Queue depth: 32 00:07:47.594 Allocate depth: 32 00:07:47.594 # threads/core: 2 00:07:47.594 Run time: 1 seconds 00:07:47.594 Verify: Yes 00:07:47.594 00:07:47.594 Running for 1 seconds... 
00:07:47.594 00:07:47.594 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.594 ------------------------------------------------------------------------------------ 00:07:47.594 0,1 2784/s 115 MiB/s 0 0 00:07:47.594 0,0 2752/s 113 MiB/s 0 0 00:07:47.594 ==================================================================================== 00:07:47.594 Total 5536/s 587 MiB/s 0 0' 00:07:47.594 07:58:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:47.594 07:58:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.594 07:58:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.594 07:58:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.594 07:58:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.594 07:58:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.594 07:58:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.594 07:58:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.594 07:58:58 -- accel/accel.sh@42 -- # jq -r . 00:07:47.594 [2024-12-07 07:58:58.486554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:47.594 [2024-12-07 07:58:58.486656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71510 ] 00:07:47.594 [2024-12-07 07:58:58.617206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.594 [2024-12-07 07:58:58.684985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val=0x1 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val=decompress 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val=software 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val=32 00:07:47.594 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.594 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.594 07:58:58 -- accel/accel.sh@21 -- # val=32 00:07:47.595 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.595 07:58:58 -- accel/accel.sh@21 -- # val=2 00:07:47.595 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.595 07:58:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.595 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.595 07:58:58 -- accel/accel.sh@21 -- # val=Yes 00:07:47.595 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.595 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.595 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.595 07:58:58 -- accel/accel.sh@21 -- # val= 00:07:47.595 07:58:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # IFS=: 00:07:47.595 07:58:58 -- accel/accel.sh@20 -- # read -r var val 00:07:48.973 07:58:59 -- accel/accel.sh@21 -- # val= 00:07:48.973 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:48.973 07:58:59 -- accel/accel.sh@21 -- # val= 00:07:48.973 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:48.973 07:58:59 -- accel/accel.sh@21 -- # val= 00:07:48.973 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # 
read -r var val 00:07:48.973 07:58:59 -- accel/accel.sh@21 -- # val= 00:07:48.973 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:48.973 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:48.973 07:58:59 -- accel/accel.sh@21 -- # val= 00:07:48.974 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.974 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:48.974 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:48.974 07:58:59 -- accel/accel.sh@21 -- # val= 00:07:48.974 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.974 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:48.974 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:48.974 07:58:59 -- accel/accel.sh@21 -- # val= 00:07:48.974 07:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.974 07:58:59 -- accel/accel.sh@20 -- # IFS=: 00:07:48.974 07:58:59 -- accel/accel.sh@20 -- # read -r var val 00:07:48.974 07:58:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.974 07:58:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:48.974 07:58:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.974 00:07:48.974 real 0m2.889s 00:07:48.974 user 0m2.460s 00:07:48.974 sys 0m0.227s 00:07:48.974 07:58:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.974 ************************************ 00:07:48.974 END TEST accel_deomp_full_mthread 00:07:48.974 ************************************ 00:07:48.974 07:58:59 -- common/autotest_common.sh@10 -- # set +x 00:07:48.974 07:58:59 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:48.974 07:58:59 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:48.974 07:58:59 -- accel/accel.sh@129 -- # build_accel_config 00:07:48.974 07:58:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.974 07:58:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:48.974 07:58:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.974 07:58:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.974 07:58:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.974 07:58:59 -- common/autotest_common.sh@10 -- # set +x 00:07:48.974 07:58:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.974 07:58:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.974 07:58:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.974 07:58:59 -- accel/accel.sh@42 -- # jq -r . 00:07:48.974 ************************************ 00:07:48.974 START TEST accel_dif_functional_tests 00:07:48.974 ************************************ 00:07:48.974 07:58:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:48.974 [2024-12-07 07:59:00.011301] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:48.974 [2024-12-07 07:59:00.011577] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71546 ] 00:07:48.974 [2024-12-07 07:59:00.150960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.974 [2024-12-07 07:59:00.216838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.974 [2024-12-07 07:59:00.216967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.974 [2024-12-07 07:59:00.216971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.232 00:07:49.232 00:07:49.232 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.232 http://cunit.sourceforge.net/ 00:07:49.232 00:07:49.232 00:07:49.232 Suite: accel_dif 00:07:49.232 Test: verify: DIF generated, GUARD check ...passed 00:07:49.232 Test: verify: DIF generated, APPTAG check ...passed 00:07:49.232 Test: verify: DIF generated, REFTAG check ...passed 00:07:49.232 Test: verify: DIF not generated, GUARD check ...[2024-12-07 07:59:00.303357] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:49.232 passed 00:07:49.232 Test: verify: DIF not generated, APPTAG check ...[2024-12-07 07:59:00.303576] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:49.232 passed 00:07:49.232 Test: verify: DIF not generated, REFTAG check ...[2024-12-07 07:59:00.303621] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:49.232 [2024-12-07 07:59:00.303769] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:49.232 passed 00:07:49.232 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:49.232 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:49.232 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-12-07 07:59:00.303806] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:49.232 [2024-12-07 07:59:00.303832] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:49.232 [2024-12-07 07:59:00.303963] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:49.232 passed 00:07:49.232 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:49.232 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:49.232 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-07 07:59:00.304247] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:49.232 passed 00:07:49.232 Test: generate copy: DIF generated, GUARD check ...passed 00:07:49.232 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:49.232 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:49.232 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:49.232 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:49.232 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:49.232 Test: generate copy: iovecs-len validate ...passed 00:07:49.232 Test: generate copy: buffer alignment validate ...[2024-12-07 07:59:00.304914] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:49.232 passed 00:07:49.232 00:07:49.232 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.232 suites 1 1 n/a 0 0 00:07:49.232 tests 20 20 20 0 0 00:07:49.232 asserts 204 204 204 0 n/a 00:07:49.232 00:07:49.232 Elapsed time = 0.004 seconds 00:07:49.232 00:07:49.232 real 0m0.526s 00:07:49.232 user 0m0.719s 00:07:49.232 sys 0m0.140s 00:07:49.232 ************************************ 00:07:49.232 END TEST accel_dif_functional_tests 00:07:49.232 ************************************ 00:07:49.232 07:59:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.232 07:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.491 00:07:49.491 real 1m1.149s 00:07:49.491 user 1m5.419s 00:07:49.491 sys 0m6.065s 00:07:49.491 07:59:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.491 07:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.491 ************************************ 00:07:49.491 END TEST accel 00:07:49.491 ************************************ 00:07:49.491 07:59:00 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:49.491 07:59:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.491 07:59:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.491 07:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.491 ************************************ 00:07:49.491 START TEST accel_rpc 00:07:49.491 ************************************ 00:07:49.491 07:59:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:49.491 * Looking for test storage... 00:07:49.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:49.491 07:59:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:49.491 07:59:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:49.491 07:59:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:49.491 07:59:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:49.491 07:59:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:49.491 07:59:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:49.491 07:59:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:49.491 07:59:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:49.491 07:59:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:49.491 07:59:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.491 07:59:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:49.491 07:59:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:49.491 07:59:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:49.491 07:59:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:49.491 07:59:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:49.491 07:59:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:49.491 07:59:00 -- scripts/common.sh@344 -- # : 1 00:07:49.491 07:59:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:49.492 07:59:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.492 07:59:00 -- scripts/common.sh@364 -- # decimal 1 00:07:49.492 07:59:00 -- scripts/common.sh@352 -- # local d=1 00:07:49.492 07:59:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.492 07:59:00 -- scripts/common.sh@354 -- # echo 1 00:07:49.492 07:59:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:49.492 07:59:00 -- scripts/common.sh@365 -- # decimal 2 00:07:49.492 07:59:00 -- scripts/common.sh@352 -- # local d=2 00:07:49.492 07:59:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.492 07:59:00 -- scripts/common.sh@354 -- # echo 2 00:07:49.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.492 07:59:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:49.492 07:59:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:49.492 07:59:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:49.492 07:59:00 -- scripts/common.sh@367 -- # return 0 00:07:49.492 07:59:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.492 07:59:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:49.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.492 --rc genhtml_branch_coverage=1 00:07:49.492 --rc genhtml_function_coverage=1 00:07:49.492 --rc genhtml_legend=1 00:07:49.492 --rc geninfo_all_blocks=1 00:07:49.492 --rc geninfo_unexecuted_blocks=1 00:07:49.492 00:07:49.492 ' 00:07:49.492 07:59:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:49.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.492 --rc genhtml_branch_coverage=1 00:07:49.492 --rc genhtml_function_coverage=1 00:07:49.492 --rc genhtml_legend=1 00:07:49.492 --rc geninfo_all_blocks=1 00:07:49.492 --rc geninfo_unexecuted_blocks=1 00:07:49.492 00:07:49.492 ' 00:07:49.492 07:59:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:49.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.492 --rc genhtml_branch_coverage=1 00:07:49.492 --rc genhtml_function_coverage=1 00:07:49.492 --rc genhtml_legend=1 00:07:49.492 --rc geninfo_all_blocks=1 00:07:49.492 --rc geninfo_unexecuted_blocks=1 00:07:49.492 00:07:49.492 ' 00:07:49.492 07:59:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:49.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.492 --rc genhtml_branch_coverage=1 00:07:49.492 --rc genhtml_function_coverage=1 00:07:49.492 --rc genhtml_legend=1 00:07:49.492 --rc geninfo_all_blocks=1 00:07:49.492 --rc geninfo_unexecuted_blocks=1 00:07:49.492 00:07:49.492 ' 00:07:49.492 07:59:00 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:49.492 07:59:00 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71623 00:07:49.492 07:59:00 -- accel/accel_rpc.sh@15 -- # waitforlisten 71623 00:07:49.492 07:59:00 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:49.492 07:59:00 -- common/autotest_common.sh@829 -- # '[' -z 71623 ']' 00:07:49.492 07:59:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.492 07:59:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.492 07:59:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:49.492 07:59:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.492 07:59:00 -- common/autotest_common.sh@10 -- # set +x 00:07:49.750 [2024-12-07 07:59:00.784341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:49.751 [2024-12-07 07:59:00.784652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71623 ] 00:07:49.751 [2024-12-07 07:59:00.911474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.751 [2024-12-07 07:59:00.992422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:49.751 [2024-12-07 07:59:00.992869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.688 07:59:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:50.688 07:59:01 -- common/autotest_common.sh@862 -- # return 0 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:50.688 07:59:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:50.688 07:59:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.688 07:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:50.688 ************************************ 00:07:50.688 START TEST accel_assign_opcode 00:07:50.688 ************************************ 00:07:50.688 07:59:01 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:50.688 07:59:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.688 07:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:50.688 [2024-12-07 07:59:01.797454] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:50.688 07:59:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:50.688 07:59:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.688 07:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:50.688 [2024-12-07 07:59:01.805449] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:50.688 07:59:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.688 07:59:01 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:50.688 07:59:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.688 07:59:01 -- common/autotest_common.sh@10 -- # set +x 00:07:50.947 07:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.947 07:59:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:50.947 07:59:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.947 07:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:50.947 07:59:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:50.947 07:59:02 -- accel/accel_rpc.sh@42 -- # grep software 00:07:50.947 07:59:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.947 software 00:07:50.947 
************************************ 00:07:50.947 END TEST accel_assign_opcode 00:07:50.947 ************************************ 00:07:50.947 00:07:50.947 real 0m0.288s 00:07:50.947 user 0m0.060s 00:07:50.947 sys 0m0.006s 00:07:50.947 07:59:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.947 07:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:50.947 07:59:02 -- accel/accel_rpc.sh@55 -- # killprocess 71623 00:07:50.947 07:59:02 -- common/autotest_common.sh@936 -- # '[' -z 71623 ']' 00:07:50.947 07:59:02 -- common/autotest_common.sh@940 -- # kill -0 71623 00:07:50.947 07:59:02 -- common/autotest_common.sh@941 -- # uname 00:07:50.947 07:59:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:50.947 07:59:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71623 00:07:50.947 killing process with pid 71623 00:07:50.947 07:59:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:50.947 07:59:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:50.947 07:59:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71623' 00:07:50.947 07:59:02 -- common/autotest_common.sh@955 -- # kill 71623 00:07:50.947 07:59:02 -- common/autotest_common.sh@960 -- # wait 71623 00:07:51.515 00:07:51.515 real 0m1.931s 00:07:51.515 user 0m2.053s 00:07:51.515 sys 0m0.444s 00:07:51.515 07:59:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.515 07:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:51.515 ************************************ 00:07:51.515 END TEST accel_rpc 00:07:51.515 ************************************ 00:07:51.515 07:59:02 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.515 07:59:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.515 07:59:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.515 07:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:51.515 ************************************ 00:07:51.515 START TEST app_cmdline 00:07:51.515 ************************************ 00:07:51.515 07:59:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:51.515 * Looking for test storage... 
00:07:51.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:51.515 07:59:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:51.515 07:59:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:51.515 07:59:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:51.515 07:59:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:51.515 07:59:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:51.515 07:59:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:51.515 07:59:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:51.515 07:59:02 -- scripts/common.sh@335 -- # IFS=.-: 00:07:51.515 07:59:02 -- scripts/common.sh@335 -- # read -ra ver1 00:07:51.515 07:59:02 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.515 07:59:02 -- scripts/common.sh@336 -- # read -ra ver2 00:07:51.515 07:59:02 -- scripts/common.sh@337 -- # local 'op=<' 00:07:51.515 07:59:02 -- scripts/common.sh@339 -- # ver1_l=2 00:07:51.515 07:59:02 -- scripts/common.sh@340 -- # ver2_l=1 00:07:51.515 07:59:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:51.515 07:59:02 -- scripts/common.sh@343 -- # case "$op" in 00:07:51.515 07:59:02 -- scripts/common.sh@344 -- # : 1 00:07:51.515 07:59:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:51.515 07:59:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.515 07:59:02 -- scripts/common.sh@364 -- # decimal 1 00:07:51.515 07:59:02 -- scripts/common.sh@352 -- # local d=1 00:07:51.515 07:59:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.515 07:59:02 -- scripts/common.sh@354 -- # echo 1 00:07:51.515 07:59:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:51.515 07:59:02 -- scripts/common.sh@365 -- # decimal 2 00:07:51.515 07:59:02 -- scripts/common.sh@352 -- # local d=2 00:07:51.515 07:59:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.515 07:59:02 -- scripts/common.sh@354 -- # echo 2 00:07:51.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
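The block of scripts/common.sh lines that reappears before each test (IFS=.-:, decimal, the ver1[v]/ver2[v] loop) is a plain-bash version comparison: the harness checks the installed lcov version before exporting the --rc lcov_*_coverage flags seen in the repeated LCOV_OPTS blocks. A rough reconstruction of what that trace is doing, based only on the commands visible in the log and not on the verbatim scripts/common.sh source:

# Rough sketch of the version gate traced above: split both versions on
# '.', '-' and ':' and compare them field by field.
version_lt() {    # corresponds to the "lt 1.15 2" / cmp_versions calls in the trace
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first version is newer: not "lt"
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first version is older: "lt" holds
    done
    return 1   # versions are equal
}
version_lt 1.15 2 && echo "1.15 is older than 2"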
00:07:51.516 07:59:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:51.516 07:59:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:51.516 07:59:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:51.516 07:59:02 -- scripts/common.sh@367 -- # return 0 00:07:51.516 07:59:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.516 07:59:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.516 --rc genhtml_branch_coverage=1 00:07:51.516 --rc genhtml_function_coverage=1 00:07:51.516 --rc genhtml_legend=1 00:07:51.516 --rc geninfo_all_blocks=1 00:07:51.516 --rc geninfo_unexecuted_blocks=1 00:07:51.516 00:07:51.516 ' 00:07:51.516 07:59:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.516 --rc genhtml_branch_coverage=1 00:07:51.516 --rc genhtml_function_coverage=1 00:07:51.516 --rc genhtml_legend=1 00:07:51.516 --rc geninfo_all_blocks=1 00:07:51.516 --rc geninfo_unexecuted_blocks=1 00:07:51.516 00:07:51.516 ' 00:07:51.516 07:59:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.516 --rc genhtml_branch_coverage=1 00:07:51.516 --rc genhtml_function_coverage=1 00:07:51.516 --rc genhtml_legend=1 00:07:51.516 --rc geninfo_all_blocks=1 00:07:51.516 --rc geninfo_unexecuted_blocks=1 00:07:51.516 00:07:51.516 ' 00:07:51.516 07:59:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:51.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.516 --rc genhtml_branch_coverage=1 00:07:51.516 --rc genhtml_function_coverage=1 00:07:51.516 --rc genhtml_legend=1 00:07:51.516 --rc geninfo_all_blocks=1 00:07:51.516 --rc geninfo_unexecuted_blocks=1 00:07:51.516 00:07:51.516 ' 00:07:51.516 07:59:02 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:51.516 07:59:02 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71741 00:07:51.516 07:59:02 -- app/cmdline.sh@18 -- # waitforlisten 71741 00:07:51.516 07:59:02 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:51.516 07:59:02 -- common/autotest_common.sh@829 -- # '[' -z 71741 ']' 00:07:51.516 07:59:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.516 07:59:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.516 07:59:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.516 07:59:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.516 07:59:02 -- common/autotest_common.sh@10 -- # set +x 00:07:51.794 [2024-12-07 07:59:02.806501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.794 [2024-12-07 07:59:02.806858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71741 ] 00:07:51.794 [2024-12-07 07:59:02.944946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.794 [2024-12-07 07:59:03.021583] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:51.794 [2024-12-07 07:59:03.022019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.758 07:59:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.758 07:59:03 -- common/autotest_common.sh@862 -- # return 0 00:07:52.758 07:59:03 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:53.016 { 00:07:53.016 "fields": { 00:07:53.016 "commit": "c13c99a5e", 00:07:53.016 "major": 24, 00:07:53.016 "minor": 1, 00:07:53.016 "patch": 1, 00:07:53.016 "suffix": "-pre" 00:07:53.016 }, 00:07:53.016 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:07:53.016 } 00:07:53.016 07:59:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:53.016 07:59:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:53.016 07:59:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:53.016 07:59:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:53.016 07:59:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:53.016 07:59:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.016 07:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.016 07:59:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:53.016 07:59:04 -- app/cmdline.sh@26 -- # sort 00:07:53.016 07:59:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.016 07:59:04 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:53.016 07:59:04 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:53.016 07:59:04 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.016 07:59:04 -- common/autotest_common.sh@650 -- # local es=0 00:07:53.016 07:59:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.016 07:59:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.016 07:59:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.016 07:59:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.016 07:59:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.016 07:59:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.016 07:59:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:53.016 07:59:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.016 07:59:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.016 07:59:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:53.275 2024/12/07 07:59:04 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:53.275 request: 00:07:53.275 { 00:07:53.275 "method": "env_dpdk_get_mem_stats", 00:07:53.275 "params": {} 00:07:53.275 } 00:07:53.275 Got JSON-RPC error response 00:07:53.275 GoRPCClient: error on JSON-RPC call 00:07:53.275 07:59:04 -- common/autotest_common.sh@653 -- # es=1 00:07:53.275 07:59:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:53.275 07:59:04 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:53.275 07:59:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:53.275 07:59:04 -- app/cmdline.sh@1 -- # killprocess 71741 00:07:53.275 07:59:04 -- common/autotest_common.sh@936 -- # '[' -z 71741 ']' 00:07:53.275 07:59:04 -- common/autotest_common.sh@940 -- # kill -0 71741 00:07:53.275 07:59:04 -- common/autotest_common.sh@941 -- # uname 00:07:53.276 07:59:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.276 07:59:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71741 00:07:53.276 07:59:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.276 07:59:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.276 killing process with pid 71741 00:07:53.276 07:59:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71741' 00:07:53.276 07:59:04 -- common/autotest_common.sh@955 -- # kill 71741 00:07:53.276 07:59:04 -- common/autotest_common.sh@960 -- # wait 71741 00:07:53.844 00:07:53.844 real 0m2.277s 00:07:53.844 user 0m2.828s 00:07:53.844 sys 0m0.541s 00:07:53.844 07:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.844 07:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.844 ************************************ 00:07:53.844 END TEST app_cmdline 00:07:53.844 ************************************ 00:07:53.844 07:59:04 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.844 07:59:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:53.844 07:59:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.844 07:59:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.844 ************************************ 00:07:53.844 START TEST version 00:07:53.844 ************************************ 00:07:53.844 07:59:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:53.844 * Looking for test storage... 
00:07:53.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:53.844 07:59:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:53.844 07:59:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:53.844 07:59:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:53.844 07:59:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:53.844 07:59:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:53.844 07:59:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:53.844 07:59:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:53.844 07:59:05 -- scripts/common.sh@335 -- # IFS=.-: 00:07:53.844 07:59:05 -- scripts/common.sh@335 -- # read -ra ver1 00:07:53.844 07:59:05 -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.844 07:59:05 -- scripts/common.sh@336 -- # read -ra ver2 00:07:53.844 07:59:05 -- scripts/common.sh@337 -- # local 'op=<' 00:07:53.844 07:59:05 -- scripts/common.sh@339 -- # ver1_l=2 00:07:53.844 07:59:05 -- scripts/common.sh@340 -- # ver2_l=1 00:07:53.844 07:59:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:53.844 07:59:05 -- scripts/common.sh@343 -- # case "$op" in 00:07:53.844 07:59:05 -- scripts/common.sh@344 -- # : 1 00:07:53.844 07:59:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:53.844 07:59:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:53.844 07:59:05 -- scripts/common.sh@364 -- # decimal 1 00:07:53.844 07:59:05 -- scripts/common.sh@352 -- # local d=1 00:07:53.844 07:59:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.844 07:59:05 -- scripts/common.sh@354 -- # echo 1 00:07:53.844 07:59:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:53.844 07:59:05 -- scripts/common.sh@365 -- # decimal 2 00:07:53.844 07:59:05 -- scripts/common.sh@352 -- # local d=2 00:07:53.844 07:59:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.844 07:59:05 -- scripts/common.sh@354 -- # echo 2 00:07:53.844 07:59:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:53.844 07:59:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:53.844 07:59:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:53.844 07:59:05 -- scripts/common.sh@367 -- # return 0 00:07:53.844 07:59:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.844 07:59:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:53.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.844 --rc genhtml_branch_coverage=1 00:07:53.844 --rc genhtml_function_coverage=1 00:07:53.844 --rc genhtml_legend=1 00:07:53.844 --rc geninfo_all_blocks=1 00:07:53.844 --rc geninfo_unexecuted_blocks=1 00:07:53.844 00:07:53.844 ' 00:07:53.844 07:59:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:53.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.844 --rc genhtml_branch_coverage=1 00:07:53.844 --rc genhtml_function_coverage=1 00:07:53.844 --rc genhtml_legend=1 00:07:53.844 --rc geninfo_all_blocks=1 00:07:53.844 --rc geninfo_unexecuted_blocks=1 00:07:53.844 00:07:53.844 ' 00:07:53.844 07:59:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:53.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.844 --rc genhtml_branch_coverage=1 00:07:53.844 --rc genhtml_function_coverage=1 00:07:53.844 --rc genhtml_legend=1 00:07:53.844 --rc geninfo_all_blocks=1 00:07:53.844 --rc geninfo_unexecuted_blocks=1 00:07:53.844 00:07:53.844 ' 00:07:53.844 07:59:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:53.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.844 --rc genhtml_branch_coverage=1 00:07:53.844 --rc genhtml_function_coverage=1 00:07:53.844 --rc genhtml_legend=1 00:07:53.844 --rc geninfo_all_blocks=1 00:07:53.844 --rc geninfo_unexecuted_blocks=1 00:07:53.844 00:07:53.844 ' 00:07:53.844 07:59:05 -- app/version.sh@17 -- # get_header_version major 00:07:53.844 07:59:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.844 07:59:05 -- app/version.sh@14 -- # cut -f2 00:07:53.844 07:59:05 -- app/version.sh@14 -- # tr -d '"' 00:07:53.844 07:59:05 -- app/version.sh@17 -- # major=24 00:07:53.844 07:59:05 -- app/version.sh@18 -- # get_header_version minor 00:07:53.844 07:59:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.844 07:59:05 -- app/version.sh@14 -- # cut -f2 00:07:53.844 07:59:05 -- app/version.sh@14 -- # tr -d '"' 00:07:53.844 07:59:05 -- app/version.sh@18 -- # minor=1 00:07:53.844 07:59:05 -- app/version.sh@19 -- # get_header_version patch 00:07:53.844 07:59:05 -- app/version.sh@14 -- # cut -f2 00:07:53.844 07:59:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.844 07:59:05 -- app/version.sh@14 -- # tr -d '"' 00:07:53.844 07:59:05 -- app/version.sh@19 -- # patch=1 00:07:53.844 07:59:05 -- app/version.sh@20 -- # get_header_version suffix 00:07:53.844 07:59:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:53.844 07:59:05 -- app/version.sh@14 -- # cut -f2 00:07:53.844 07:59:05 -- app/version.sh@14 -- # tr -d '"' 00:07:53.844 07:59:05 -- app/version.sh@20 -- # suffix=-pre 00:07:53.844 07:59:05 -- app/version.sh@22 -- # version=24.1 00:07:53.844 07:59:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:53.844 07:59:05 -- app/version.sh@25 -- # version=24.1.1 00:07:53.844 07:59:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:53.844 07:59:05 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:53.844 07:59:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:54.102 07:59:05 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:54.102 07:59:05 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:54.102 00:07:54.102 real 0m0.250s 00:07:54.102 user 0m0.161s 00:07:54.102 sys 0m0.128s 00:07:54.102 07:59:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.102 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 ************************************ 00:07:54.102 END TEST version 00:07:54.102 ************************************ 00:07:54.102 07:59:05 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:54.102 07:59:05 -- spdk/autotest.sh@191 -- # uname -s 00:07:54.102 07:59:05 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:54.102 07:59:05 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:54.102 07:59:05 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:54.102 07:59:05 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:54.102 07:59:05 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:54.102 07:59:05 
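The version test traced above derives each component with a small grep/cut/tr pipeline over include/spdk/version.h and then cross-checks the assembled string against the Python package. A condensed sketch of that pipeline, following the commands shown at app/version.sh in the trace; the helper name and suffix handling are simplified here:

# Condensed sketch of the header parsing traced above (test/app/version.sh).
version_h=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
get_header_version() {   # e.g. get_header_version major  ->  24
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" "$version_h" | cut -f2 | tr -d '"'
}
major=$(get_header_version major)    # 24
minor=$(get_header_version minor)    # 1
patch=$(get_header_version patch)    # 1
suffix=$(get_header_version suffix)  # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch                       # -> 24.1.1
py_version=$(python3 -c 'import spdk; print(spdk.__version__)')   # 24.1.1rc0
# in this run the -pre suffix corresponds to an rc0 Python version:
[[ $py_version == ${version}rc0 ]] && echo "header and python package agree"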
-- spdk/autotest.sh@255 -- # timing_exit lib 00:07:54.102 07:59:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.102 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 07:59:05 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:54.102 07:59:05 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:54.102 07:59:05 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:07:54.102 07:59:05 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:07:54.102 07:59:05 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:07:54.102 07:59:05 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:07:54.102 07:59:05 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.102 07:59:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:54.102 07:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.102 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.102 ************************************ 00:07:54.102 START TEST nvmf_tcp 00:07:54.102 ************************************ 00:07:54.102 07:59:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.102 * Looking for test storage... 00:07:54.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:54.102 07:59:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.102 07:59:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.102 07:59:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.361 07:59:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.361 07:59:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.361 07:59:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.361 07:59:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.361 07:59:05 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.361 07:59:05 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.361 07:59:05 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.361 07:59:05 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.361 07:59:05 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.361 07:59:05 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.361 07:59:05 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.361 07:59:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.361 07:59:05 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.361 07:59:05 -- scripts/common.sh@344 -- # : 1 00:07:54.361 07:59:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.361 07:59:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.361 07:59:05 -- scripts/common.sh@364 -- # decimal 1 00:07:54.361 07:59:05 -- scripts/common.sh@352 -- # local d=1 00:07:54.361 07:59:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.361 07:59:05 -- scripts/common.sh@354 -- # echo 1 00:07:54.361 07:59:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.361 07:59:05 -- scripts/common.sh@365 -- # decimal 2 00:07:54.361 07:59:05 -- scripts/common.sh@352 -- # local d=2 00:07:54.361 07:59:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.361 07:59:05 -- scripts/common.sh@354 -- # echo 2 00:07:54.361 07:59:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.361 07:59:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.361 07:59:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.361 07:59:05 -- scripts/common.sh@367 -- # return 0 00:07:54.361 07:59:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.361 07:59:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.361 --rc genhtml_branch_coverage=1 00:07:54.361 --rc genhtml_function_coverage=1 00:07:54.361 --rc genhtml_legend=1 00:07:54.361 --rc geninfo_all_blocks=1 00:07:54.361 --rc geninfo_unexecuted_blocks=1 00:07:54.361 00:07:54.361 ' 00:07:54.361 07:59:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.361 --rc genhtml_branch_coverage=1 00:07:54.361 --rc genhtml_function_coverage=1 00:07:54.361 --rc genhtml_legend=1 00:07:54.361 --rc geninfo_all_blocks=1 00:07:54.361 --rc geninfo_unexecuted_blocks=1 00:07:54.361 00:07:54.361 ' 00:07:54.361 07:59:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.361 --rc genhtml_branch_coverage=1 00:07:54.361 --rc genhtml_function_coverage=1 00:07:54.361 --rc genhtml_legend=1 00:07:54.361 --rc geninfo_all_blocks=1 00:07:54.361 --rc geninfo_unexecuted_blocks=1 00:07:54.361 00:07:54.361 ' 00:07:54.361 07:59:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.361 --rc genhtml_branch_coverage=1 00:07:54.361 --rc genhtml_function_coverage=1 00:07:54.361 --rc genhtml_legend=1 00:07:54.361 --rc geninfo_all_blocks=1 00:07:54.361 --rc geninfo_unexecuted_blocks=1 00:07:54.361 00:07:54.361 ' 00:07:54.361 07:59:05 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:54.361 07:59:05 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:54.361 07:59:05 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.361 07:59:05 -- nvmf/common.sh@7 -- # uname -s 00:07:54.361 07:59:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.361 07:59:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.361 07:59:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.361 07:59:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.361 07:59:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.361 07:59:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.361 07:59:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.361 07:59:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.361 07:59:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.361 07:59:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.361 07:59:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:07:54.361 07:59:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:07:54.361 07:59:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.361 07:59:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.361 07:59:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.361 07:59:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.361 07:59:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.361 07:59:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.361 07:59:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.361 07:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.361 07:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.361 07:59:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.361 07:59:05 -- paths/export.sh@5 -- # export PATH 00:07:54.362 07:59:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.362 07:59:05 -- nvmf/common.sh@46 -- # : 0 00:07:54.362 07:59:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:54.362 07:59:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:54.362 07:59:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:54.362 07:59:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.362 07:59:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.362 07:59:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:54.362 07:59:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:54.362 07:59:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:54.362 07:59:05 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:54.362 07:59:05 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:54.362 07:59:05 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:54.362 07:59:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.362 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.362 07:59:05 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:54.362 07:59:05 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:54.362 07:59:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:54.362 07:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.362 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.362 ************************************ 00:07:54.362 START TEST nvmf_example 00:07:54.362 ************************************ 00:07:54.362 07:59:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:54.362 * Looking for test storage... 00:07:54.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.362 07:59:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.362 07:59:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.362 07:59:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.620 07:59:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.620 07:59:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.620 07:59:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.620 07:59:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.620 07:59:05 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.620 07:59:05 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.620 07:59:05 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.620 07:59:05 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.620 07:59:05 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.620 07:59:05 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.620 07:59:05 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.620 07:59:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.620 07:59:05 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.620 07:59:05 -- scripts/common.sh@344 -- # : 1 00:07:54.620 07:59:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.620 07:59:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.620 07:59:05 -- scripts/common.sh@364 -- # decimal 1 00:07:54.620 07:59:05 -- scripts/common.sh@352 -- # local d=1 00:07:54.620 07:59:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.620 07:59:05 -- scripts/common.sh@354 -- # echo 1 00:07:54.620 07:59:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.620 07:59:05 -- scripts/common.sh@365 -- # decimal 2 00:07:54.620 07:59:05 -- scripts/common.sh@352 -- # local d=2 00:07:54.620 07:59:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.620 07:59:05 -- scripts/common.sh@354 -- # echo 2 00:07:54.620 07:59:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.621 07:59:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.621 07:59:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.621 07:59:05 -- scripts/common.sh@367 -- # return 0 00:07:54.621 07:59:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.621 07:59:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.621 --rc genhtml_branch_coverage=1 00:07:54.621 --rc genhtml_function_coverage=1 00:07:54.621 --rc genhtml_legend=1 00:07:54.621 --rc geninfo_all_blocks=1 00:07:54.621 --rc geninfo_unexecuted_blocks=1 00:07:54.621 00:07:54.621 ' 00:07:54.621 07:59:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.621 --rc genhtml_branch_coverage=1 00:07:54.621 --rc genhtml_function_coverage=1 00:07:54.621 --rc genhtml_legend=1 00:07:54.621 --rc geninfo_all_blocks=1 00:07:54.621 --rc geninfo_unexecuted_blocks=1 00:07:54.621 00:07:54.621 ' 00:07:54.621 07:59:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.621 --rc genhtml_branch_coverage=1 00:07:54.621 --rc genhtml_function_coverage=1 00:07:54.621 --rc genhtml_legend=1 00:07:54.621 --rc geninfo_all_blocks=1 00:07:54.621 --rc geninfo_unexecuted_blocks=1 00:07:54.621 00:07:54.621 ' 00:07:54.621 07:59:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.621 --rc genhtml_branch_coverage=1 00:07:54.621 --rc genhtml_function_coverage=1 00:07:54.621 --rc genhtml_legend=1 00:07:54.621 --rc geninfo_all_blocks=1 00:07:54.621 --rc geninfo_unexecuted_blocks=1 00:07:54.621 00:07:54.621 ' 00:07:54.621 07:59:05 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.621 07:59:05 -- nvmf/common.sh@7 -- # uname -s 00:07:54.621 07:59:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.621 07:59:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.621 07:59:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.621 07:59:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.621 07:59:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.621 07:59:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.621 07:59:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.621 07:59:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.621 07:59:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.621 07:59:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.621 07:59:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
00:07:54.621 07:59:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:07:54.621 07:59:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.621 07:59:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.621 07:59:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.621 07:59:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.621 07:59:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.621 07:59:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.621 07:59:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.621 07:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.621 07:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.621 07:59:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.621 07:59:05 -- paths/export.sh@5 -- # export PATH 00:07:54.621 07:59:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.621 07:59:05 -- nvmf/common.sh@46 -- # : 0 00:07:54.621 07:59:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:54.621 07:59:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:54.621 07:59:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:54.621 07:59:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.621 07:59:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.621 07:59:05 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:07:54.621 07:59:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:54.621 07:59:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:54.621 07:59:05 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:54.621 07:59:05 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:54.621 07:59:05 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:54.621 07:59:05 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:54.621 07:59:05 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:54.621 07:59:05 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:54.621 07:59:05 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:54.621 07:59:05 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:54.621 07:59:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.621 07:59:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.621 07:59:05 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:54.621 07:59:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:54.621 07:59:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.621 07:59:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:54.621 07:59:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:54.621 07:59:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:54.621 07:59:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.621 07:59:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.621 07:59:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.621 07:59:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:54.621 07:59:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:54.621 07:59:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:54.621 07:59:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:54.621 07:59:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:54.621 07:59:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:54.621 07:59:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.621 07:59:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.621 07:59:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:54.621 07:59:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:54.621 07:59:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:54.621 07:59:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:54.621 07:59:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:54.621 07:59:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.621 07:59:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:54.621 07:59:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:54.621 07:59:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:54.621 07:59:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:54.621 07:59:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:54.621 Cannot find device "nvmf_init_br" 00:07:54.621 07:59:05 -- nvmf/common.sh@153 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:54.621 Cannot find device "nvmf_tgt_br" 00:07:54.621 07:59:05 -- nvmf/common.sh@154 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.621 Cannot find device "nvmf_tgt_br2" 
00:07:54.621 07:59:05 -- nvmf/common.sh@155 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:54.621 Cannot find device "nvmf_init_br" 00:07:54.621 07:59:05 -- nvmf/common.sh@156 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:54.621 Cannot find device "nvmf_tgt_br" 00:07:54.621 07:59:05 -- nvmf/common.sh@157 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:54.621 Cannot find device "nvmf_tgt_br2" 00:07:54.621 07:59:05 -- nvmf/common.sh@158 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:54.621 Cannot find device "nvmf_br" 00:07:54.621 07:59:05 -- nvmf/common.sh@159 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:54.621 Cannot find device "nvmf_init_if" 00:07:54.621 07:59:05 -- nvmf/common.sh@160 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.621 07:59:05 -- nvmf/common.sh@161 -- # true 00:07:54.621 07:59:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.621 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.621 07:59:05 -- nvmf/common.sh@162 -- # true 00:07:54.622 07:59:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.622 07:59:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.622 07:59:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.622 07:59:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.622 07:59:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:54.622 07:59:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:54.622 07:59:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:54.880 07:59:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:54.880 07:59:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:54.880 07:59:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:54.880 07:59:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:54.880 07:59:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:54.880 07:59:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:54.880 07:59:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.880 07:59:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.880 07:59:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.880 07:59:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:54.880 07:59:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:54.880 07:59:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.880 07:59:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.880 07:59:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:54.880 07:59:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.880 07:59:06 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.880 07:59:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:54.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:07:54.880 00:07:54.880 --- 10.0.0.2 ping statistics --- 00:07:54.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.880 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:07:54.880 07:59:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:54.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:07:54.880 00:07:54.880 --- 10.0.0.3 ping statistics --- 00:07:54.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.880 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:07:54.880 07:59:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:54.880 00:07:54.880 --- 10.0.0.1 ping statistics --- 00:07:54.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.880 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:54.880 07:59:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.880 07:59:06 -- nvmf/common.sh@421 -- # return 0 00:07:54.880 07:59:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:54.880 07:59:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.880 07:59:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:54.880 07:59:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:54.880 07:59:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.880 07:59:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:54.880 07:59:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:54.880 07:59:06 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:54.880 07:59:06 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:54.880 07:59:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.880 07:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:54.880 07:59:06 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:54.880 07:59:06 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:54.880 07:59:06 -- target/nvmf_example.sh@34 -- # nvmfpid=72115 00:07:54.880 07:59:06 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.880 07:59:06 -- target/nvmf_example.sh@36 -- # waitforlisten 72115 00:07:54.880 07:59:06 -- common/autotest_common.sh@829 -- # '[' -z 72115 ']' 00:07:54.880 07:59:06 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:54.880 07:59:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.880 07:59:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.880 07:59:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
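The veth topology that nvmftestinit just built and ping-checked above is small enough to summarize: one namespace for the target side, an initiator veth pair plus two target veth pairs all enslaved to a single bridge, 10.0.0.1 kept in the root namespace and 10.0.0.2/10.0.0.3 moved into nvmf_tgt_ns_spdk. Condensed from the nvmf/common.sh commands in the trace; the error-tolerant teardown of leftovers (the earlier "Cannot find device" lines) is omitted:

# Condensed from the nvmf_veth_init trace above (nvmf/common.sh).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, first port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target side, second port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # root namespace -> target addresses
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> initiator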
00:07:54.880 07:59:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.880 07:59:06 -- common/autotest_common.sh@10 -- # set +x 00:07:56.255 07:59:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.255 07:59:07 -- common/autotest_common.sh@862 -- # return 0 00:07:56.255 07:59:07 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:56.255 07:59:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.255 07:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:56.255 07:59:07 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.255 07:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.255 07:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:56.255 07:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.255 07:59:07 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:56.255 07:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.255 07:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:56.255 07:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.255 07:59:07 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:56.255 07:59:07 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.255 07:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.255 07:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:56.255 07:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.255 07:59:07 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:56.255 07:59:07 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:56.255 07:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.255 07:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:56.255 07:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.255 07:59:07 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.255 07:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.255 07:59:07 -- common/autotest_common.sh@10 -- # set +x 00:07:56.255 07:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.255 07:59:07 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:56.255 07:59:07 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:08.449 Initializing NVMe Controllers 00:08:08.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:08.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:08.449 Initialization complete. Launching workers. 
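Before the measurement results below, the example target was configured over RPC with a single malloc-backed namespace exported on the veth address, and spdk_nvme_perf was pointed at it from the root namespace. Condensed from the rpc_cmd calls traced above (rpc_cmd talks to the example nvmf app running inside nvmf_tgt_ns_spdk):

# Condensed from the nvmf_example.sh trace above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512                     # 64 MiB bdev, 512-byte blocks -> Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 4 KiB random mixed I/O, 30% reads (-M 30), queue depth 64, 10 seconds; results follow below.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'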
00:08:08.449 ======================================================== 00:08:08.449 Latency(us) 00:08:08.449 Device Information : IOPS MiB/s Average min max 00:08:08.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16500.55 64.46 3878.27 553.64 20235.21 00:08:08.449 ======================================================== 00:08:08.449 Total : 16500.55 64.46 3878.27 553.64 20235.21 00:08:08.449 00:08:08.449 07:59:17 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:08.449 07:59:17 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:08.449 07:59:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:08.449 07:59:17 -- nvmf/common.sh@116 -- # sync 00:08:08.449 07:59:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:08.449 07:59:17 -- nvmf/common.sh@119 -- # set +e 00:08:08.449 07:59:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:08.449 07:59:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:08.449 rmmod nvme_tcp 00:08:08.449 rmmod nvme_fabrics 00:08:08.449 rmmod nvme_keyring 00:08:08.449 07:59:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:08.449 07:59:17 -- nvmf/common.sh@123 -- # set -e 00:08:08.449 07:59:17 -- nvmf/common.sh@124 -- # return 0 00:08:08.449 07:59:17 -- nvmf/common.sh@477 -- # '[' -n 72115 ']' 00:08:08.449 07:59:17 -- nvmf/common.sh@478 -- # killprocess 72115 00:08:08.449 07:59:17 -- common/autotest_common.sh@936 -- # '[' -z 72115 ']' 00:08:08.449 07:59:17 -- common/autotest_common.sh@940 -- # kill -0 72115 00:08:08.449 07:59:17 -- common/autotest_common.sh@941 -- # uname 00:08:08.449 07:59:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:08.449 07:59:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72115 00:08:08.449 07:59:17 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:08.449 killing process with pid 72115 00:08:08.449 07:59:17 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:08.449 07:59:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72115' 00:08:08.449 07:59:17 -- common/autotest_common.sh@955 -- # kill 72115 00:08:08.449 07:59:17 -- common/autotest_common.sh@960 -- # wait 72115 00:08:08.449 nvmf threads initialize successfully 00:08:08.449 bdev subsystem init successfully 00:08:08.449 created a nvmf target service 00:08:08.449 create targets's poll groups done 00:08:08.449 all subsystems of target started 00:08:08.449 nvmf target is running 00:08:08.449 all subsystems of target stopped 00:08:08.449 destroy targets's poll groups done 00:08:08.449 destroyed the nvmf target service 00:08:08.449 bdev subsystem finish successfully 00:08:08.449 nvmf threads destroy successfully 00:08:08.449 07:59:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:08.449 07:59:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:08.449 07:59:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:08.449 07:59:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.449 07:59:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:08.449 07:59:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.449 07:59:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.449 07:59:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.450 07:59:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:08.450 07:59:17 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:08.450 07:59:17 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:08.450 07:59:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.450 00:08:08.450 real 0m12.501s 00:08:08.450 user 0m44.729s 00:08:08.450 sys 0m1.942s 00:08:08.450 07:59:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.450 07:59:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.450 ************************************ 00:08:08.450 END TEST nvmf_example 00:08:08.450 ************************************ 00:08:08.450 07:59:17 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:08.450 07:59:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.450 07:59:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.450 07:59:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.450 ************************************ 00:08:08.450 START TEST nvmf_filesystem 00:08:08.450 ************************************ 00:08:08.450 07:59:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:08.450 * Looking for test storage... 00:08:08.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.450 07:59:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.450 07:59:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.450 07:59:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.450 07:59:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.450 07:59:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.450 07:59:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.450 07:59:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.450 07:59:18 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.450 07:59:18 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.450 07:59:18 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.450 07:59:18 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.450 07:59:18 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.450 07:59:18 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.450 07:59:18 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.450 07:59:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.450 07:59:18 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.450 07:59:18 -- scripts/common.sh@344 -- # : 1 00:08:08.450 07:59:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.450 07:59:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.450 07:59:18 -- scripts/common.sh@364 -- # decimal 1 00:08:08.450 07:59:18 -- scripts/common.sh@352 -- # local d=1 00:08:08.450 07:59:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.450 07:59:18 -- scripts/common.sh@354 -- # echo 1 00:08:08.450 07:59:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.450 07:59:18 -- scripts/common.sh@365 -- # decimal 2 00:08:08.450 07:59:18 -- scripts/common.sh@352 -- # local d=2 00:08:08.450 07:59:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.450 07:59:18 -- scripts/common.sh@354 -- # echo 2 00:08:08.450 07:59:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.450 07:59:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.450 07:59:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.450 07:59:18 -- scripts/common.sh@367 -- # return 0 00:08:08.450 07:59:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.450 07:59:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.450 --rc genhtml_branch_coverage=1 00:08:08.450 --rc genhtml_function_coverage=1 00:08:08.450 --rc genhtml_legend=1 00:08:08.450 --rc geninfo_all_blocks=1 00:08:08.450 --rc geninfo_unexecuted_blocks=1 00:08:08.450 00:08:08.450 ' 00:08:08.450 07:59:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.450 --rc genhtml_branch_coverage=1 00:08:08.450 --rc genhtml_function_coverage=1 00:08:08.450 --rc genhtml_legend=1 00:08:08.450 --rc geninfo_all_blocks=1 00:08:08.450 --rc geninfo_unexecuted_blocks=1 00:08:08.450 00:08:08.450 ' 00:08:08.450 07:59:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.450 --rc genhtml_branch_coverage=1 00:08:08.450 --rc genhtml_function_coverage=1 00:08:08.450 --rc genhtml_legend=1 00:08:08.450 --rc geninfo_all_blocks=1 00:08:08.450 --rc geninfo_unexecuted_blocks=1 00:08:08.450 00:08:08.450 ' 00:08:08.450 07:59:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.450 --rc genhtml_branch_coverage=1 00:08:08.450 --rc genhtml_function_coverage=1 00:08:08.450 --rc genhtml_legend=1 00:08:08.450 --rc geninfo_all_blocks=1 00:08:08.450 --rc geninfo_unexecuted_blocks=1 00:08:08.450 00:08:08.450 ' 00:08:08.450 07:59:18 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:08.450 07:59:18 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:08.450 07:59:18 -- common/autotest_common.sh@34 -- # set -e 00:08:08.450 07:59:18 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:08.450 07:59:18 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:08.450 07:59:18 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:08.450 07:59:18 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:08.450 07:59:18 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:08.450 07:59:18 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:08.450 07:59:18 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:08.450 07:59:18 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:08.450 07:59:18 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:08:08.450 07:59:18 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:08.450 07:59:18 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:08.450 07:59:18 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:08.450 07:59:18 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:08.450 07:59:18 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:08.450 07:59:18 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:08.450 07:59:18 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:08.450 07:59:18 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:08.450 07:59:18 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:08.450 07:59:18 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:08.450 07:59:18 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:08.450 07:59:18 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:08.450 07:59:18 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:08.450 07:59:18 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:08.450 07:59:18 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:08.450 07:59:18 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:08.450 07:59:18 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:08.450 07:59:18 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:08.450 07:59:18 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:08.450 07:59:18 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:08.450 07:59:18 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:08.450 07:59:18 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:08.450 07:59:18 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:08.450 07:59:18 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:08.450 07:59:18 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:08.450 07:59:18 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:08.450 07:59:18 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:08.450 07:59:18 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:08.450 07:59:18 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:08.450 07:59:18 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:08.450 07:59:18 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:08.450 07:59:18 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:08.450 07:59:18 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:08.450 07:59:18 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:08.450 07:59:18 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:08.450 07:59:18 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:08.450 07:59:18 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:08.450 07:59:18 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:08.450 07:59:18 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:08.450 07:59:18 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:08.450 07:59:18 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:08.450 07:59:18 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:08.450 07:59:18 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:08.450 07:59:18 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:08.450 07:59:18 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:08.450 07:59:18 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:08.450 07:59:18 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:08.450 07:59:18 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:08.450 07:59:18 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:08.450 07:59:18 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:08.450 07:59:18 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:08.450 07:59:18 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:08.450 07:59:18 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:08.450 07:59:18 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:08.450 07:59:18 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:08.450 07:59:18 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:08.450 07:59:18 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:08.450 07:59:18 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:08.450 07:59:18 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:08.450 07:59:18 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:08.450 07:59:18 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:08.450 07:59:18 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:08.450 07:59:18 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:08.450 07:59:18 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:08.451 07:59:18 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:08.451 07:59:18 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:08.451 07:59:18 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:08.451 07:59:18 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:08.451 07:59:18 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:08.451 07:59:18 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:08.451 07:59:18 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:08.451 07:59:18 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:08.451 07:59:18 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:08.451 07:59:18 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:08.451 07:59:18 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:08.451 07:59:18 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:08.451 07:59:18 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:08.451 07:59:18 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:08.451 07:59:18 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:08.451 07:59:18 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:08.451 07:59:18 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:08.451 07:59:18 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:08.451 07:59:18 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:08.451 07:59:18 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:08.451 07:59:18 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:08.451 07:59:18 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:08.451 07:59:18 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:08.451 07:59:18 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:08.451 07:59:18 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:08.451 07:59:18 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:08.451 #define SPDK_CONFIG_H 00:08:08.451 #define SPDK_CONFIG_APPS 1 00:08:08.451 #define SPDK_CONFIG_ARCH native 00:08:08.451 #undef SPDK_CONFIG_ASAN 00:08:08.451 #define SPDK_CONFIG_AVAHI 1 00:08:08.451 #undef SPDK_CONFIG_CET 00:08:08.451 #define SPDK_CONFIG_COVERAGE 1 00:08:08.451 #define SPDK_CONFIG_CROSS_PREFIX 00:08:08.451 #undef SPDK_CONFIG_CRYPTO 00:08:08.451 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:08.451 #undef SPDK_CONFIG_CUSTOMOCF 00:08:08.451 #undef SPDK_CONFIG_DAOS 00:08:08.451 #define SPDK_CONFIG_DAOS_DIR 00:08:08.451 #define SPDK_CONFIG_DEBUG 1 00:08:08.451 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:08.451 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:08.451 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:08.451 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:08.451 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:08.451 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:08.451 #define SPDK_CONFIG_EXAMPLES 1 00:08:08.451 #undef SPDK_CONFIG_FC 00:08:08.451 #define SPDK_CONFIG_FC_PATH 00:08:08.451 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:08.451 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:08.451 #undef SPDK_CONFIG_FUSE 00:08:08.451 #undef SPDK_CONFIG_FUZZER 00:08:08.451 #define SPDK_CONFIG_FUZZER_LIB 00:08:08.451 #define SPDK_CONFIG_GOLANG 1 00:08:08.451 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:08.451 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:08.451 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:08.451 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:08.451 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:08.451 #define SPDK_CONFIG_IDXD 1 00:08:08.451 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:08.451 #undef SPDK_CONFIG_IPSEC_MB 00:08:08.451 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:08.451 #define SPDK_CONFIG_ISAL 1 00:08:08.451 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:08.451 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:08.451 #define SPDK_CONFIG_LIBDIR 00:08:08.451 #undef SPDK_CONFIG_LTO 00:08:08.451 #define SPDK_CONFIG_MAX_LCORES 00:08:08.451 #define SPDK_CONFIG_NVME_CUSE 1 00:08:08.451 #undef SPDK_CONFIG_OCF 00:08:08.451 #define SPDK_CONFIG_OCF_PATH 00:08:08.451 #define SPDK_CONFIG_OPENSSL_PATH 00:08:08.451 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:08.451 #undef SPDK_CONFIG_PGO_USE 00:08:08.451 #define SPDK_CONFIG_PREFIX /usr/local 00:08:08.451 #undef SPDK_CONFIG_RAID5F 00:08:08.451 #undef SPDK_CONFIG_RBD 00:08:08.451 #define SPDK_CONFIG_RDMA 1 00:08:08.451 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:08.451 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:08.451 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:08.451 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:08.451 #define SPDK_CONFIG_SHARED 1 00:08:08.451 #undef SPDK_CONFIG_SMA 00:08:08.451 #define SPDK_CONFIG_TESTS 1 00:08:08.451 #undef SPDK_CONFIG_TSAN 00:08:08.451 #define SPDK_CONFIG_UBLK 1 00:08:08.451 #define SPDK_CONFIG_UBSAN 1 00:08:08.451 #undef SPDK_CONFIG_UNIT_TESTS 00:08:08.451 #undef SPDK_CONFIG_URING 00:08:08.451 #define SPDK_CONFIG_URING_PATH 00:08:08.451 #undef SPDK_CONFIG_URING_ZNS 00:08:08.451 #define SPDK_CONFIG_USDT 1 00:08:08.451 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:08.451 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:08.451 #undef SPDK_CONFIG_VFIO_USER 00:08:08.451 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:08.451 #define SPDK_CONFIG_VHOST 1 00:08:08.451 #define SPDK_CONFIG_VIRTIO 1 00:08:08.451 #undef SPDK_CONFIG_VTUNE 00:08:08.451 #define SPDK_CONFIG_VTUNE_DIR 00:08:08.451 #define SPDK_CONFIG_WERROR 1 00:08:08.451 #define SPDK_CONFIG_WPDK_DIR 00:08:08.451 #undef SPDK_CONFIG_XNVME 00:08:08.451 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:08.451 07:59:18 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:08.451 07:59:18 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.451 07:59:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.451 07:59:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.451 07:59:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.451 07:59:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.451 07:59:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.451 07:59:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.451 07:59:18 -- paths/export.sh@5 -- # export PATH 00:08:08.451 07:59:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.451 07:59:18 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:08.451 07:59:18 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:08.451 07:59:18 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:08.451 07:59:18 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:08.451 07:59:18 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:08.451 07:59:18 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:08.451 07:59:18 -- pm/common@16 -- # TEST_TAG=N/A 00:08:08.451 07:59:18 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:08.451 07:59:18 -- common/autotest_common.sh@52 -- # : 1 00:08:08.451 07:59:18 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:08.451 07:59:18 -- common/autotest_common.sh@56 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:08.451 07:59:18 -- common/autotest_common.sh@58 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:08.451 07:59:18 -- common/autotest_common.sh@60 -- # : 1 00:08:08.451 07:59:18 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:08.451 07:59:18 -- common/autotest_common.sh@62 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:08.451 07:59:18 -- common/autotest_common.sh@64 -- # : 00:08:08.451 07:59:18 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:08.451 07:59:18 -- common/autotest_common.sh@66 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:08.451 07:59:18 -- common/autotest_common.sh@68 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:08.451 07:59:18 -- common/autotest_common.sh@70 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:08.451 07:59:18 -- common/autotest_common.sh@72 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:08.451 07:59:18 -- common/autotest_common.sh@74 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:08.451 07:59:18 -- common/autotest_common.sh@76 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:08.451 07:59:18 -- common/autotest_common.sh@78 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:08.451 07:59:18 -- common/autotest_common.sh@80 -- # : 0 00:08:08.451 07:59:18 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:08.451 07:59:18 -- common/autotest_common.sh@82 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:08.452 07:59:18 -- common/autotest_common.sh@84 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:08.452 07:59:18 -- common/autotest_common.sh@86 -- # : 1 00:08:08.452 07:59:18 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:08.452 07:59:18 -- common/autotest_common.sh@88 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:08.452 07:59:18 -- common/autotest_common.sh@90 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:08.452 07:59:18 -- common/autotest_common.sh@92 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:08.452 07:59:18 -- common/autotest_common.sh@94 -- # : 0 00:08:08.452 07:59:18 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:08.452 07:59:18 -- common/autotest_common.sh@96 -- # : tcp 00:08:08.452 07:59:18 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:08.452 07:59:18 -- common/autotest_common.sh@98 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:08.452 07:59:18 -- common/autotest_common.sh@100 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:08.452 07:59:18 -- common/autotest_common.sh@102 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:08.452 07:59:18 -- common/autotest_common.sh@104 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:08.452 07:59:18 -- common/autotest_common.sh@106 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:08.452 07:59:18 -- common/autotest_common.sh@108 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:08.452 07:59:18 -- common/autotest_common.sh@110 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:08.452 07:59:18 -- common/autotest_common.sh@112 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:08.452 07:59:18 -- common/autotest_common.sh@114 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:08.452 07:59:18 -- common/autotest_common.sh@116 -- # : 1 00:08:08.452 07:59:18 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:08.452 07:59:18 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:08.452 07:59:18 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:08.452 07:59:18 -- common/autotest_common.sh@120 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:08.452 07:59:18 -- common/autotest_common.sh@122 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:08.452 07:59:18 -- common/autotest_common.sh@124 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:08.452 07:59:18 -- common/autotest_common.sh@126 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:08.452 07:59:18 -- common/autotest_common.sh@128 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:08.452 07:59:18 -- common/autotest_common.sh@130 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:08.452 07:59:18 -- common/autotest_common.sh@132 -- # : v23.11 00:08:08.452 07:59:18 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:08.452 07:59:18 -- common/autotest_common.sh@134 -- # : true 00:08:08.452 07:59:18 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:08.452 07:59:18 -- common/autotest_common.sh@136 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:08.452 07:59:18 -- common/autotest_common.sh@138 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:08.452 07:59:18 -- common/autotest_common.sh@140 -- # : 1 00:08:08.452 07:59:18 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:08.452 07:59:18 -- 
common/autotest_common.sh@142 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:08.452 07:59:18 -- common/autotest_common.sh@144 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:08.452 07:59:18 -- common/autotest_common.sh@146 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:08.452 07:59:18 -- common/autotest_common.sh@148 -- # : 00:08:08.452 07:59:18 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:08.452 07:59:18 -- common/autotest_common.sh@150 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:08.452 07:59:18 -- common/autotest_common.sh@152 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:08.452 07:59:18 -- common/autotest_common.sh@154 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:08.452 07:59:18 -- common/autotest_common.sh@156 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:08.452 07:59:18 -- common/autotest_common.sh@158 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:08.452 07:59:18 -- common/autotest_common.sh@160 -- # : 0 00:08:08.452 07:59:18 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:08.452 07:59:18 -- common/autotest_common.sh@163 -- # : 00:08:08.452 07:59:18 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:08.452 07:59:18 -- common/autotest_common.sh@165 -- # : 1 00:08:08.452 07:59:18 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:08.452 07:59:18 -- common/autotest_common.sh@167 -- # : 1 00:08:08.452 07:59:18 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:08.452 07:59:18 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:08.452 07:59:18 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:08.452 07:59:18 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:08.452 07:59:18 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:08.452 07:59:18 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:08.452 07:59:18 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:08.452 07:59:18 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:08.452 07:59:18 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:08.452 07:59:18 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:08.452 07:59:18 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:08.452 07:59:18 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:08.452 07:59:18 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:08.452 07:59:18 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:08.452 07:59:18 -- common/autotest_common.sh@196 -- # cat 00:08:08.452 07:59:18 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:08.452 07:59:18 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:08.452 07:59:18 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:08.452 07:59:18 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:08.452 07:59:18 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:08.452 07:59:18 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:08.452 07:59:18 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:08.452 07:59:18 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:08.452 07:59:18 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:08.452 07:59:18 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:08.452 07:59:18 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:08.452 07:59:18 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:08.452 07:59:18 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:08.452 07:59:18 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:08.452 07:59:18 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:08.452 07:59:18 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:08.452 07:59:18 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:08.453 07:59:18 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:08.453 07:59:18 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:08.453 07:59:18 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:08.453 07:59:18 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:08.453 07:59:18 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:08.453 07:59:18 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:08.453 07:59:18 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:08.453 07:59:18 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:08.453 07:59:18 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:08.453 07:59:18 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:08.453 07:59:18 -- common/autotest_common.sh@259 -- # valgrind= 00:08:08.453 07:59:18 -- common/autotest_common.sh@265 -- # uname -s 00:08:08.453 07:59:18 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:08.453 07:59:18 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:08.453 07:59:18 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:08.453 07:59:18 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:08.453 07:59:18 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:08.453 07:59:18 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:08.453 07:59:18 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:08.453 07:59:18 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:08.453 07:59:18 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:08.453 07:59:18 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:08.453 07:59:18 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:08.453 07:59:18 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:08.453 07:59:18 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:08.453 07:59:18 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:08.453 07:59:18 -- common/autotest_common.sh@319 -- # [[ 
-z 72362 ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@319 -- # kill -0 72362 00:08:08.453 07:59:18 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:08.453 07:59:18 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:08.453 07:59:18 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:08.453 07:59:18 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:08.453 07:59:18 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:08.453 07:59:18 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:08.453 07:59:18 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:08.453 07:59:18 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.pYdTB1 00:08:08.453 07:59:18 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:08.453 07:59:18 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.pYdTB1/tests/target /tmp/spdk.pYdTB1 00:08:08.453 07:59:18 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@328 -- # df -T 00:08:08.453 07:59:18 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293805568 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289514496 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265167872 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:08.453 07:59:18 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=13293805568 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=6289514496 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266286080 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266425344 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:08.453 07:59:18 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # avails["$mount"]=97241554944 00:08:08.453 07:59:18 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:08.453 07:59:18 -- common/autotest_common.sh@364 -- # uses["$mount"]=2461224960 00:08:08.453 07:59:18 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:08.453 07:59:18 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:08.453 * Looking for test storage... 00:08:08.453 07:59:18 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:08.453 07:59:18 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:08.453 07:59:18 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.453 07:59:18 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:08.453 07:59:18 -- common/autotest_common.sh@373 -- # mount=/home 00:08:08.453 07:59:18 -- common/autotest_common.sh@375 -- # target_space=13293805568 00:08:08.453 07:59:18 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:08.453 07:59:18 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:08.453 07:59:18 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.453 07:59:18 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.453 07:59:18 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.453 07:59:18 -- common/autotest_common.sh@390 -- # return 0 00:08:08.453 07:59:18 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:08.453 07:59:18 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:08.453 07:59:18 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:08.453 07:59:18 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:08.453 07:59:18 -- common/autotest_common.sh@1682 -- # true 00:08:08.453 07:59:18 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:08.453 07:59:18 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:08.453 07:59:18 -- common/autotest_common.sh@27 -- # exec 00:08:08.453 07:59:18 -- common/autotest_common.sh@29 -- # exec 00:08:08.453 07:59:18 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:08.453 07:59:18 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:08.454 07:59:18 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:08.454 07:59:18 -- common/autotest_common.sh@18 -- # set -x 00:08:08.454 07:59:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.454 07:59:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.454 07:59:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.454 07:59:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.454 07:59:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.454 07:59:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.454 07:59:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.454 07:59:18 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.454 07:59:18 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.454 07:59:18 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.454 07:59:18 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.454 07:59:18 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.454 07:59:18 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.454 07:59:18 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.454 07:59:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.454 07:59:18 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.454 07:59:18 -- scripts/common.sh@344 -- # : 1 00:08:08.454 07:59:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.454 07:59:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.454 07:59:18 -- scripts/common.sh@364 -- # decimal 1 00:08:08.454 07:59:18 -- scripts/common.sh@352 -- # local d=1 00:08:08.454 07:59:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.454 07:59:18 -- scripts/common.sh@354 -- # echo 1 00:08:08.454 07:59:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.454 07:59:18 -- scripts/common.sh@365 -- # decimal 2 00:08:08.454 07:59:18 -- scripts/common.sh@352 -- # local d=2 00:08:08.454 07:59:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.454 07:59:18 -- scripts/common.sh@354 -- # echo 2 00:08:08.454 07:59:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.454 07:59:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.454 07:59:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.454 07:59:18 -- scripts/common.sh@367 -- # return 0 00:08:08.454 07:59:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.454 07:59:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.454 --rc genhtml_branch_coverage=1 00:08:08.454 --rc genhtml_function_coverage=1 00:08:08.454 --rc genhtml_legend=1 00:08:08.454 --rc geninfo_all_blocks=1 00:08:08.454 --rc geninfo_unexecuted_blocks=1 00:08:08.454 00:08:08.454 ' 00:08:08.454 07:59:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.454 --rc genhtml_branch_coverage=1 00:08:08.454 --rc genhtml_function_coverage=1 00:08:08.454 --rc genhtml_legend=1 00:08:08.454 --rc geninfo_all_blocks=1 00:08:08.454 --rc geninfo_unexecuted_blocks=1 00:08:08.454 00:08:08.454 ' 00:08:08.454 07:59:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.454 --rc genhtml_branch_coverage=1 00:08:08.454 --rc genhtml_function_coverage=1 00:08:08.454 --rc genhtml_legend=1 00:08:08.454 --rc geninfo_all_blocks=1 00:08:08.454 --rc 
geninfo_unexecuted_blocks=1 00:08:08.454 00:08:08.454 ' 00:08:08.454 07:59:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.454 --rc genhtml_branch_coverage=1 00:08:08.454 --rc genhtml_function_coverage=1 00:08:08.454 --rc genhtml_legend=1 00:08:08.454 --rc geninfo_all_blocks=1 00:08:08.454 --rc geninfo_unexecuted_blocks=1 00:08:08.454 00:08:08.454 ' 00:08:08.454 07:59:18 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.454 07:59:18 -- nvmf/common.sh@7 -- # uname -s 00:08:08.454 07:59:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.454 07:59:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.454 07:59:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.454 07:59:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.454 07:59:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.454 07:59:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.454 07:59:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.454 07:59:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.454 07:59:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.454 07:59:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.454 07:59:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:08:08.454 07:59:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:08:08.454 07:59:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.454 07:59:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.454 07:59:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.454 07:59:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.454 07:59:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.454 07:59:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.454 07:59:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.454 07:59:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.454 07:59:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.454 07:59:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.454 07:59:18 -- paths/export.sh@5 -- # export PATH 00:08:08.454 07:59:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.454 07:59:18 -- nvmf/common.sh@46 -- # : 0 00:08:08.454 07:59:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:08.454 07:59:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:08.454 07:59:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:08.454 07:59:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.454 07:59:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.454 07:59:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:08.454 07:59:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:08.454 07:59:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:08.454 07:59:18 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:08.454 07:59:18 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:08.454 07:59:18 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:08.454 07:59:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:08.454 07:59:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.454 07:59:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:08.454 07:59:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:08.454 07:59:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:08.454 07:59:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.454 07:59:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.454 07:59:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.454 07:59:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:08.455 07:59:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:08.455 07:59:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:08.455 07:59:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:08.455 07:59:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:08.455 07:59:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:08.455 07:59:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.455 07:59:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.455 07:59:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:08.455 07:59:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:08.455 07:59:18 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.455 07:59:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.455 07:59:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.455 07:59:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.455 07:59:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.455 07:59:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.455 07:59:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:08.455 07:59:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.455 07:59:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:08.455 07:59:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:08.455 Cannot find device "nvmf_tgt_br" 00:08:08.455 07:59:18 -- nvmf/common.sh@154 -- # true 00:08:08.455 07:59:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.455 Cannot find device "nvmf_tgt_br2" 00:08:08.455 07:59:18 -- nvmf/common.sh@155 -- # true 00:08:08.455 07:59:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:08.455 07:59:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:08.455 Cannot find device "nvmf_tgt_br" 00:08:08.455 07:59:18 -- nvmf/common.sh@157 -- # true 00:08:08.455 07:59:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:08.455 Cannot find device "nvmf_tgt_br2" 00:08:08.455 07:59:18 -- nvmf/common.sh@158 -- # true 00:08:08.455 07:59:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:08.455 07:59:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:08.455 07:59:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.455 07:59:18 -- nvmf/common.sh@161 -- # true 00:08:08.455 07:59:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.455 07:59:18 -- nvmf/common.sh@162 -- # true 00:08:08.455 07:59:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.455 07:59:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.455 07:59:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.455 07:59:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.455 07:59:18 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.455 07:59:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.455 07:59:18 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.455 07:59:18 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:08.455 07:59:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:08.455 07:59:18 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:08.455 07:59:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:08.455 07:59:18 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:08.455 07:59:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:08.455 07:59:18 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.455 07:59:18 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.455 07:59:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.455 07:59:18 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:08.455 07:59:18 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:08.455 07:59:18 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.455 07:59:18 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.455 07:59:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:08.455 07:59:18 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.455 07:59:18 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.455 07:59:18 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:08.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:08:08.455 00:08:08.455 --- 10.0.0.2 ping statistics --- 00:08:08.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.455 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:08.455 07:59:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:08.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:08:08.455 00:08:08.455 --- 10.0.0.3 ping statistics --- 00:08:08.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.455 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:08.455 07:59:18 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:08.455 00:08:08.455 --- 10.0.0.1 ping statistics --- 00:08:08.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.455 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:08.455 07:59:18 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.455 07:59:18 -- nvmf/common.sh@421 -- # return 0 00:08:08.455 07:59:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:08.455 07:59:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.455 07:59:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:08.455 07:59:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:08.455 07:59:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.455 07:59:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:08.455 07:59:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:08.455 07:59:18 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:08.455 07:59:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:08.455 07:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.455 07:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:08.455 ************************************ 00:08:08.455 START TEST nvmf_filesystem_no_in_capsule 00:08:08.455 ************************************ 00:08:08.455 07:59:18 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:08.455 07:59:18 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:08.455 07:59:18 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:08.455 07:59:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:08.455 07:59:18 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:08.455 07:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:08.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.455 07:59:18 -- nvmf/common.sh@469 -- # nvmfpid=72541 00:08:08.455 07:59:18 -- nvmf/common.sh@470 -- # waitforlisten 72541 00:08:08.455 07:59:18 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.455 07:59:18 -- common/autotest_common.sh@829 -- # '[' -z 72541 ']' 00:08:08.455 07:59:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.455 07:59:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.455 07:59:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.455 07:59:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.455 07:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:08.455 [2024-12-07 07:59:18.824606] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.455 [2024-12-07 07:59:18.824708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.455 [2024-12-07 07:59:18.965453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.455 [2024-12-07 07:59:19.046054] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:08.455 [2024-12-07 07:59:19.046408] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.455 [2024-12-07 07:59:19.046516] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.455 [2024-12-07 07:59:19.046622] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
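For orientation, the rpc_cmd and nvme invocations traced below reduce to the sequence sketched here. The scripts/rpc.py path and the default /var/tmp/spdk.sock socket are assumptions based on stock SPDK; every flag, NQN, and address is copied from the trace itself.

  # target side (inside nvmf_tgt_ns_spdk): TCP transport, in-capsule data size 0, 8192-byte IO unit
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # 512 MiB backing bdev: 1048576 blocks of 512 bytes
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  # subsystem + namespace + listener on the namespaced address 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side (host namespace), reaching the target through the veth/bridge path built above
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec \
               --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420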
00:08:08.455 [2024-12-07 07:59:19.046854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.455 [2024-12-07 07:59:19.048346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.455 [2024-12-07 07:59:19.048274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.455 [2024-12-07 07:59:19.048339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.713 07:59:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.713 07:59:19 -- common/autotest_common.sh@862 -- # return 0 00:08:08.713 07:59:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:08.713 07:59:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.713 07:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.713 07:59:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.713 07:59:19 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:08.713 07:59:19 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:08.713 07:59:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.713 07:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.713 [2024-12-07 07:59:19.900363] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.713 07:59:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.713 07:59:19 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:08.713 07:59:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.713 07:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:08.972 Malloc1 00:08:08.972 07:59:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.972 07:59:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:08.972 07:59:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.972 07:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.972 07:59:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.972 07:59:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:08.972 07:59:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.972 07:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.972 07:59:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.972 07:59:20 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.972 07:59:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.972 07:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.972 [2024-12-07 07:59:20.099328] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.972 07:59:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.972 07:59:20 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:08.972 07:59:20 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:08.972 07:59:20 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:08.972 07:59:20 -- common/autotest_common.sh@1369 -- # local bs 00:08:08.972 07:59:20 -- common/autotest_common.sh@1370 -- # local nb 00:08:08.972 07:59:20 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:08.972 07:59:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.972 07:59:20 -- common/autotest_common.sh@10 -- # set +x 00:08:08.972 
07:59:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.972 07:59:20 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:08.972 { 00:08:08.972 "aliases": [ 00:08:08.972 "9e9ff986-83c3-4f0d-83c2-b794a7ad2d4c" 00:08:08.972 ], 00:08:08.972 "assigned_rate_limits": { 00:08:08.972 "r_mbytes_per_sec": 0, 00:08:08.972 "rw_ios_per_sec": 0, 00:08:08.972 "rw_mbytes_per_sec": 0, 00:08:08.972 "w_mbytes_per_sec": 0 00:08:08.972 }, 00:08:08.972 "block_size": 512, 00:08:08.972 "claim_type": "exclusive_write", 00:08:08.972 "claimed": true, 00:08:08.972 "driver_specific": {}, 00:08:08.972 "memory_domains": [ 00:08:08.972 { 00:08:08.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.972 "dma_device_type": 2 00:08:08.972 } 00:08:08.972 ], 00:08:08.972 "name": "Malloc1", 00:08:08.972 "num_blocks": 1048576, 00:08:08.972 "product_name": "Malloc disk", 00:08:08.972 "supported_io_types": { 00:08:08.972 "abort": true, 00:08:08.972 "compare": false, 00:08:08.972 "compare_and_write": false, 00:08:08.972 "flush": true, 00:08:08.972 "nvme_admin": false, 00:08:08.972 "nvme_io": false, 00:08:08.972 "read": true, 00:08:08.972 "reset": true, 00:08:08.972 "unmap": true, 00:08:08.972 "write": true, 00:08:08.972 "write_zeroes": true 00:08:08.972 }, 00:08:08.972 "uuid": "9e9ff986-83c3-4f0d-83c2-b794a7ad2d4c", 00:08:08.972 "zoned": false 00:08:08.972 } 00:08:08.972 ]' 00:08:08.972 07:59:20 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:08.972 07:59:20 -- common/autotest_common.sh@1372 -- # bs=512 00:08:08.972 07:59:20 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:08.972 07:59:20 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:08.972 07:59:20 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:08.972 07:59:20 -- common/autotest_common.sh@1377 -- # echo 512 00:08:08.972 07:59:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:08.972 07:59:20 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:09.263 07:59:20 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:09.263 07:59:20 -- common/autotest_common.sh@1187 -- # local i=0 00:08:09.263 07:59:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.263 07:59:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:09.263 07:59:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:11.164 07:59:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:11.164 07:59:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:11.164 07:59:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:11.164 07:59:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:11.164 07:59:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:11.164 07:59:22 -- common/autotest_common.sh@1197 -- # return 0 00:08:11.164 07:59:22 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:11.164 07:59:22 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:11.422 07:59:22 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:11.422 07:59:22 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:11.422 07:59:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:11.422 07:59:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:11.422 07:59:22 -- 
setup/common.sh@80 -- # echo 536870912 00:08:11.422 07:59:22 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:11.422 07:59:22 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:11.422 07:59:22 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:11.422 07:59:22 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:11.422 07:59:22 -- target/filesystem.sh@69 -- # partprobe 00:08:11.422 07:59:22 -- target/filesystem.sh@70 -- # sleep 1 00:08:12.365 07:59:23 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:12.365 07:59:23 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:12.365 07:59:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:12.365 07:59:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.365 07:59:23 -- common/autotest_common.sh@10 -- # set +x 00:08:12.365 ************************************ 00:08:12.365 START TEST filesystem_ext4 00:08:12.365 ************************************ 00:08:12.365 07:59:23 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:12.365 07:59:23 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:12.365 07:59:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.365 07:59:23 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:12.365 07:59:23 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:12.365 07:59:23 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:12.365 07:59:23 -- common/autotest_common.sh@914 -- # local i=0 00:08:12.365 07:59:23 -- common/autotest_common.sh@915 -- # local force 00:08:12.365 07:59:23 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:12.365 07:59:23 -- common/autotest_common.sh@918 -- # force=-F 00:08:12.365 07:59:23 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:12.365 mke2fs 1.47.0 (5-Feb-2023) 00:08:12.623 Discarding device blocks: 0/522240 done 00:08:12.623 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:12.623 Filesystem UUID: 4e632a0e-df14-4ef9-a44d-c6f7873f70f6 00:08:12.623 Superblock backups stored on blocks: 00:08:12.623 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:12.623 00:08:12.623 Allocating group tables: 0/64 done 00:08:12.623 Writing inode tables: 0/64 done 00:08:12.623 Creating journal (8192 blocks): done 00:08:12.623 Writing superblocks and filesystem accounting information: 0/64 done 00:08:12.623 00:08:12.623 07:59:23 -- common/autotest_common.sh@931 -- # return 0 00:08:12.623 07:59:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.886 07:59:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.886 07:59:29 -- target/filesystem.sh@25 -- # sync 00:08:18.145 07:59:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.145 07:59:29 -- target/filesystem.sh@27 -- # sync 00:08:18.145 07:59:29 -- target/filesystem.sh@29 -- # i=0 00:08:18.145 07:59:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.145 07:59:29 -- target/filesystem.sh@37 -- # kill -0 72541 00:08:18.145 07:59:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.145 07:59:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.145 07:59:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.145 07:59:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.145 ************************************ 00:08:18.145 END TEST filesystem_ext4 00:08:18.145 
************************************ 00:08:18.145 00:08:18.145 real 0m5.600s 00:08:18.145 user 0m0.017s 00:08:18.145 sys 0m0.075s 00:08:18.145 07:59:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.145 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:18.145 07:59:29 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:18.145 07:59:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:18.145 07:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.145 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:18.145 ************************************ 00:08:18.145 START TEST filesystem_btrfs 00:08:18.145 ************************************ 00:08:18.145 07:59:29 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:18.145 07:59:29 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:18.145 07:59:29 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.145 07:59:29 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:18.145 07:59:29 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:18.145 07:59:29 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:18.145 07:59:29 -- common/autotest_common.sh@914 -- # local i=0 00:08:18.145 07:59:29 -- common/autotest_common.sh@915 -- # local force 00:08:18.145 07:59:29 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:18.145 07:59:29 -- common/autotest_common.sh@920 -- # force=-f 00:08:18.145 07:59:29 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:18.145 btrfs-progs v6.8.1 00:08:18.145 See https://btrfs.readthedocs.io for more information. 00:08:18.145 00:08:18.145 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:18.145 NOTE: several default settings have changed in version 5.15, please make sure 00:08:18.145 this does not affect your deployments: 00:08:18.145 - DUP for metadata (-m dup) 00:08:18.145 - enabled no-holes (-O no-holes) 00:08:18.145 - enabled free-space-tree (-R free-space-tree) 00:08:18.145 00:08:18.145 Label: (null) 00:08:18.145 UUID: f2900df2-d51e-4cf1-b8f9-e86683ec1873 00:08:18.145 Node size: 16384 00:08:18.145 Sector size: 4096 (CPU page size: 4096) 00:08:18.145 Filesystem size: 510.00MiB 00:08:18.145 Block group profiles: 00:08:18.145 Data: single 8.00MiB 00:08:18.145 Metadata: DUP 32.00MiB 00:08:18.145 System: DUP 8.00MiB 00:08:18.145 SSD detected: yes 00:08:18.145 Zoned device: no 00:08:18.145 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:18.145 Checksum: crc32c 00:08:18.145 Number of devices: 1 00:08:18.145 Devices: 00:08:18.145 ID SIZE PATH 00:08:18.145 1 510.00MiB /dev/nvme0n1p1 00:08:18.145 00:08:18.145 07:59:29 -- common/autotest_common.sh@931 -- # return 0 00:08:18.146 07:59:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.146 07:59:29 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.146 07:59:29 -- target/filesystem.sh@25 -- # sync 00:08:18.405 07:59:29 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.405 07:59:29 -- target/filesystem.sh@27 -- # sync 00:08:18.405 07:59:29 -- target/filesystem.sh@29 -- # i=0 00:08:18.405 07:59:29 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.405 07:59:29 -- target/filesystem.sh@37 -- # kill -0 72541 00:08:18.405 07:59:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.405 07:59:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.405 07:59:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.405 07:59:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.405 ************************************ 00:08:18.405 END TEST filesystem_btrfs 00:08:18.405 ************************************ 00:08:18.405 00:08:18.405 real 0m0.218s 00:08:18.405 user 0m0.024s 00:08:18.405 sys 0m0.055s 00:08:18.405 07:59:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.405 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:18.405 07:59:29 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:18.405 07:59:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:18.405 07:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.405 07:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:18.405 ************************************ 00:08:18.405 START TEST filesystem_xfs 00:08:18.405 ************************************ 00:08:18.405 07:59:29 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:18.405 07:59:29 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:18.405 07:59:29 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.405 07:59:29 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:18.405 07:59:29 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:18.405 07:59:29 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:18.405 07:59:29 -- common/autotest_common.sh@914 -- # local i=0 00:08:18.405 07:59:29 -- common/autotest_common.sh@915 -- # local force 00:08:18.405 07:59:29 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:18.405 07:59:29 -- common/autotest_common.sh@920 -- # force=-f 00:08:18.405 07:59:29 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:18.405 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:18.405 = sectsz=512 attr=2, projid32bit=1 00:08:18.405 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:18.405 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:18.405 data = bsize=4096 blocks=130560, imaxpct=25 00:08:18.405 = sunit=0 swidth=0 blks 00:08:18.405 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:18.405 log =internal log bsize=4096 blocks=16384, version=2 00:08:18.405 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:18.405 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:19.343 Discarding blocks...Done. 00:08:19.343 07:59:30 -- common/autotest_common.sh@931 -- # return 0 00:08:19.343 07:59:30 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.873 07:59:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.873 07:59:32 -- target/filesystem.sh@25 -- # sync 00:08:21.873 07:59:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.873 07:59:32 -- target/filesystem.sh@27 -- # sync 00:08:21.873 07:59:32 -- target/filesystem.sh@29 -- # i=0 00:08:21.873 07:59:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.873 07:59:32 -- target/filesystem.sh@37 -- # kill -0 72541 00:08:21.873 07:59:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.873 07:59:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.873 07:59:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.873 07:59:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.873 ************************************ 00:08:21.873 END TEST filesystem_xfs 00:08:21.873 ************************************ 00:08:21.873 00:08:21.873 real 0m3.159s 00:08:21.873 user 0m0.024s 00:08:21.873 sys 0m0.057s 00:08:21.873 07:59:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.873 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:08:21.873 07:59:32 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.873 07:59:32 -- target/filesystem.sh@93 -- # sync 00:08:21.873 07:59:32 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.873 07:59:32 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.873 07:59:32 -- common/autotest_common.sh@1208 -- # local i=0 00:08:21.873 07:59:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:21.873 07:59:32 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.873 07:59:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:21.873 07:59:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.873 07:59:32 -- common/autotest_common.sh@1220 -- # return 0 00:08:21.873 07:59:32 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.873 07:59:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.873 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:08:21.873 07:59:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.873 07:59:32 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.873 07:59:32 -- target/filesystem.sh@101 -- # killprocess 72541 00:08:21.873 07:59:32 -- common/autotest_common.sh@936 -- # '[' -z 72541 ']' 00:08:21.873 07:59:32 -- common/autotest_common.sh@940 -- # kill -0 72541 00:08:21.873 07:59:32 -- common/autotest_common.sh@941 -- # uname 00:08:21.873 07:59:32 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.873 07:59:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72541 00:08:21.873 killing process with pid 72541 00:08:21.873 07:59:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:21.873 07:59:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:21.873 07:59:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72541' 00:08:21.873 07:59:32 -- common/autotest_common.sh@955 -- # kill 72541 00:08:21.873 07:59:32 -- common/autotest_common.sh@960 -- # wait 72541 00:08:22.131 ************************************ 00:08:22.131 END TEST nvmf_filesystem_no_in_capsule 00:08:22.131 ************************************ 00:08:22.131 07:59:33 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:22.131 00:08:22.131 real 0m14.488s 00:08:22.131 user 0m55.473s 00:08:22.131 sys 0m2.076s 00:08:22.131 07:59:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.131 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.131 07:59:33 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:22.131 07:59:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:22.131 07:59:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.131 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.131 ************************************ 00:08:22.131 START TEST nvmf_filesystem_in_capsule 00:08:22.131 ************************************ 00:08:22.131 07:59:33 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:22.131 07:59:33 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:22.131 07:59:33 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:22.131 07:59:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:22.132 07:59:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.132 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.132 07:59:33 -- nvmf/common.sh@469 -- # nvmfpid=72913 00:08:22.132 07:59:33 -- nvmf/common.sh@470 -- # waitforlisten 72913 00:08:22.132 07:59:33 -- common/autotest_common.sh@829 -- # '[' -z 72913 ']' 00:08:22.132 07:59:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.132 07:59:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.132 07:59:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.132 07:59:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.132 07:59:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.132 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.132 [2024-12-07 07:59:33.366422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
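The in_capsule pass that starts here runs the same ext4/btrfs/xfs workload; the only target-side difference is the in-capsule data size given to nvmf_create_transport, which lets writes of up to 4096 bytes travel inside the NVMe/TCP command capsule instead of being pulled by the target in a separate data transfer. A sketch of the two variants, again assuming scripts/rpc.py (the flags are the ones visible in the traces):

  # nvmf_filesystem_no_in_capsule (earlier pass): no in-capsule data
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # nvmf_filesystem_in_capsule (this pass): 4096-byte in-capsule data size
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096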
00:08:22.132 [2024-12-07 07:59:33.366519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.390 [2024-12-07 07:59:33.506078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.390 [2024-12-07 07:59:33.584436] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.390 [2024-12-07 07:59:33.584893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.390 [2024-12-07 07:59:33.585047] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.390 [2024-12-07 07:59:33.585281] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.390 [2024-12-07 07:59:33.585457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.390 [2024-12-07 07:59:33.585593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.390 [2024-12-07 07:59:33.585691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.390 [2024-12-07 07:59:33.585691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.325 07:59:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.325 07:59:34 -- common/autotest_common.sh@862 -- # return 0 00:08:23.325 07:59:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:23.325 07:59:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:23.325 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 07:59:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.325 07:59:34 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:23.325 07:59:34 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:23.325 07:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.325 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 [2024-12-07 07:59:34.392223] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:23.325 07:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.325 07:59:34 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:23.325 07:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.325 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 Malloc1 00:08:23.325 07:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.325 07:59:34 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.325 07:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.325 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 07:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.325 07:59:34 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:23.325 07:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.325 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 07:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.325 07:59:34 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.325 07:59:34 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.325 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.325 [2024-12-07 07:59:34.581175] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.325 07:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.325 07:59:34 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:23.325 07:59:34 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:23.325 07:59:34 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:23.325 07:59:34 -- common/autotest_common.sh@1369 -- # local bs 00:08:23.325 07:59:34 -- common/autotest_common.sh@1370 -- # local nb 00:08:23.325 07:59:34 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:23.325 07:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.325 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.583 07:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.583 07:59:34 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:23.583 { 00:08:23.583 "aliases": [ 00:08:23.583 "8b4eb4c6-eb30-4bf3-ba27-013266db6ba0" 00:08:23.583 ], 00:08:23.583 "assigned_rate_limits": { 00:08:23.583 "r_mbytes_per_sec": 0, 00:08:23.583 "rw_ios_per_sec": 0, 00:08:23.583 "rw_mbytes_per_sec": 0, 00:08:23.583 "w_mbytes_per_sec": 0 00:08:23.583 }, 00:08:23.583 "block_size": 512, 00:08:23.583 "claim_type": "exclusive_write", 00:08:23.583 "claimed": true, 00:08:23.583 "driver_specific": {}, 00:08:23.583 "memory_domains": [ 00:08:23.583 { 00:08:23.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.583 "dma_device_type": 2 00:08:23.583 } 00:08:23.583 ], 00:08:23.583 "name": "Malloc1", 00:08:23.583 "num_blocks": 1048576, 00:08:23.583 "product_name": "Malloc disk", 00:08:23.583 "supported_io_types": { 00:08:23.583 "abort": true, 00:08:23.583 "compare": false, 00:08:23.583 "compare_and_write": false, 00:08:23.583 "flush": true, 00:08:23.583 "nvme_admin": false, 00:08:23.583 "nvme_io": false, 00:08:23.583 "read": true, 00:08:23.583 "reset": true, 00:08:23.583 "unmap": true, 00:08:23.583 "write": true, 00:08:23.583 "write_zeroes": true 00:08:23.583 }, 00:08:23.583 "uuid": "8b4eb4c6-eb30-4bf3-ba27-013266db6ba0", 00:08:23.583 "zoned": false 00:08:23.583 } 00:08:23.583 ]' 00:08:23.583 07:59:34 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:23.583 07:59:34 -- common/autotest_common.sh@1372 -- # bs=512 00:08:23.583 07:59:34 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:23.583 07:59:34 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:23.583 07:59:34 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:23.583 07:59:34 -- common/autotest_common.sh@1377 -- # echo 512 00:08:23.583 07:59:34 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:23.583 07:59:34 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:23.842 07:59:34 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:23.842 07:59:34 -- common/autotest_common.sh@1187 -- # local i=0 00:08:23.842 07:59:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:23.842 07:59:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:23.842 07:59:34 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:25.739 07:59:36 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:25.739 07:59:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:25.739 07:59:36 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:25.739 07:59:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:25.739 07:59:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:25.739 07:59:36 -- common/autotest_common.sh@1197 -- # return 0 00:08:25.739 07:59:36 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:25.739 07:59:36 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:25.739 07:59:36 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:25.739 07:59:36 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:25.739 07:59:36 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:25.739 07:59:36 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:25.739 07:59:36 -- setup/common.sh@80 -- # echo 536870912 00:08:25.739 07:59:36 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:25.739 07:59:36 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:25.739 07:59:36 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:25.739 07:59:36 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:25.739 07:59:36 -- target/filesystem.sh@69 -- # partprobe 00:08:25.739 07:59:37 -- target/filesystem.sh@70 -- # sleep 1 00:08:27.153 07:59:38 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:27.153 07:59:38 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:27.153 07:59:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:27.153 07:59:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.153 07:59:38 -- common/autotest_common.sh@10 -- # set +x 00:08:27.153 ************************************ 00:08:27.153 START TEST filesystem_in_capsule_ext4 00:08:27.153 ************************************ 00:08:27.153 07:59:38 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:27.153 07:59:38 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:27.153 07:59:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:27.153 07:59:38 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:27.153 07:59:38 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:27.153 07:59:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:27.153 07:59:38 -- common/autotest_common.sh@914 -- # local i=0 00:08:27.153 07:59:38 -- common/autotest_common.sh@915 -- # local force 00:08:27.153 07:59:38 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:27.153 07:59:38 -- common/autotest_common.sh@918 -- # force=-F 00:08:27.153 07:59:38 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:27.153 mke2fs 1.47.0 (5-Feb-2023) 00:08:27.153 Discarding device blocks: 0/522240 done 00:08:27.153 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:27.153 Filesystem UUID: 347c35a6-b71b-47e8-b04e-6c6c1a45ec44 00:08:27.153 Superblock backups stored on blocks: 00:08:27.153 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:27.153 00:08:27.153 Allocating group tables: 0/64 done 00:08:27.153 Writing inode tables: 0/64 done 00:08:27.153 Creating journal (8192 blocks): done 00:08:27.153 Writing superblocks and filesystem accounting information: 0/64 done 00:08:27.153 00:08:27.153 07:59:38 
-- common/autotest_common.sh@931 -- # return 0 00:08:27.153 07:59:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.407 07:59:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.407 07:59:43 -- target/filesystem.sh@25 -- # sync 00:08:32.407 07:59:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.407 07:59:43 -- target/filesystem.sh@27 -- # sync 00:08:32.407 07:59:43 -- target/filesystem.sh@29 -- # i=0 00:08:32.407 07:59:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.407 07:59:43 -- target/filesystem.sh@37 -- # kill -0 72913 00:08:32.407 07:59:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.407 07:59:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.407 07:59:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.407 07:59:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.407 ************************************ 00:08:32.407 END TEST filesystem_in_capsule_ext4 00:08:32.407 ************************************ 00:08:32.407 00:08:32.407 real 0m5.562s 00:08:32.407 user 0m0.020s 00:08:32.407 sys 0m0.060s 00:08:32.407 07:59:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.407 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:08:32.407 07:59:43 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:32.407 07:59:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:32.407 07:59:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.407 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:08:32.407 ************************************ 00:08:32.407 START TEST filesystem_in_capsule_btrfs 00:08:32.407 ************************************ 00:08:32.407 07:59:43 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:32.407 07:59:43 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:32.407 07:59:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.407 07:59:43 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:32.407 07:59:43 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:32.407 07:59:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:32.407 07:59:43 -- common/autotest_common.sh@914 -- # local i=0 00:08:32.407 07:59:43 -- common/autotest_common.sh@915 -- # local force 00:08:32.407 07:59:43 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:32.407 07:59:43 -- common/autotest_common.sh@920 -- # force=-f 00:08:32.407 07:59:43 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:32.666 btrfs-progs v6.8.1 00:08:32.666 See https://btrfs.readthedocs.io for more information. 00:08:32.666 00:08:32.666 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:32.666 NOTE: several default settings have changed in version 5.15, please make sure 00:08:32.666 this does not affect your deployments: 00:08:32.666 - DUP for metadata (-m dup) 00:08:32.666 - enabled no-holes (-O no-holes) 00:08:32.666 - enabled free-space-tree (-R free-space-tree) 00:08:32.666 00:08:32.666 Label: (null) 00:08:32.666 UUID: 48c23ddd-10ab-411a-adc4-03950a72d080 00:08:32.666 Node size: 16384 00:08:32.666 Sector size: 4096 (CPU page size: 4096) 00:08:32.666 Filesystem size: 510.00MiB 00:08:32.666 Block group profiles: 00:08:32.666 Data: single 8.00MiB 00:08:32.666 Metadata: DUP 32.00MiB 00:08:32.666 System: DUP 8.00MiB 00:08:32.666 SSD detected: yes 00:08:32.666 Zoned device: no 00:08:32.666 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:32.666 Checksum: crc32c 00:08:32.666 Number of devices: 1 00:08:32.666 Devices: 00:08:32.666 ID SIZE PATH 00:08:32.666 1 510.00MiB /dev/nvme0n1p1 00:08:32.666 00:08:32.666 07:59:43 -- common/autotest_common.sh@931 -- # return 0 00:08:32.666 07:59:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.666 07:59:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.666 07:59:43 -- target/filesystem.sh@25 -- # sync 00:08:32.666 07:59:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.666 07:59:43 -- target/filesystem.sh@27 -- # sync 00:08:32.666 07:59:43 -- target/filesystem.sh@29 -- # i=0 00:08:32.666 07:59:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.666 07:59:43 -- target/filesystem.sh@37 -- # kill -0 72913 00:08:32.666 07:59:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.666 07:59:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.666 07:59:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.666 07:59:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.666 ************************************ 00:08:32.666 END TEST filesystem_in_capsule_btrfs 00:08:32.666 ************************************ 00:08:32.666 00:08:32.666 real 0m0.220s 00:08:32.666 user 0m0.023s 00:08:32.666 sys 0m0.060s 00:08:32.666 07:59:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.666 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:08:32.666 07:59:43 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:32.666 07:59:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:32.666 07:59:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.666 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:08:32.666 ************************************ 00:08:32.666 START TEST filesystem_in_capsule_xfs 00:08:32.666 ************************************ 00:08:32.666 07:59:43 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:32.666 07:59:43 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:32.666 07:59:43 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.666 07:59:43 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:32.666 07:59:43 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:32.666 07:59:43 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:32.666 07:59:43 -- common/autotest_common.sh@914 -- # local i=0 00:08:32.666 07:59:43 -- common/autotest_common.sh@915 -- # local force 00:08:32.666 07:59:43 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:32.666 07:59:43 -- common/autotest_common.sh@920 -- # force=-f 00:08:32.666 07:59:43 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:32.925 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:32.925 = sectsz=512 attr=2, projid32bit=1 00:08:32.925 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:32.925 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:32.925 data = bsize=4096 blocks=130560, imaxpct=25 00:08:32.925 = sunit=0 swidth=0 blks 00:08:32.925 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:32.925 log =internal log bsize=4096 blocks=16384, version=2 00:08:32.925 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:32.925 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:33.492 Discarding blocks...Done. 00:08:33.492 07:59:44 -- common/autotest_common.sh@931 -- # return 0 00:08:33.492 07:59:44 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:35.389 07:59:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:35.389 07:59:46 -- target/filesystem.sh@25 -- # sync 00:08:35.389 07:59:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:35.389 07:59:46 -- target/filesystem.sh@27 -- # sync 00:08:35.389 07:59:46 -- target/filesystem.sh@29 -- # i=0 00:08:35.389 07:59:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:35.389 07:59:46 -- target/filesystem.sh@37 -- # kill -0 72913 00:08:35.389 07:59:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:35.390 07:59:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:35.390 07:59:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:35.390 07:59:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:35.390 ************************************ 00:08:35.390 END TEST filesystem_in_capsule_xfs 00:08:35.390 ************************************ 00:08:35.390 00:08:35.390 real 0m2.630s 00:08:35.390 user 0m0.017s 00:08:35.390 sys 0m0.063s 00:08:35.390 07:59:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.390 07:59:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.390 07:59:46 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:35.390 07:59:46 -- target/filesystem.sh@93 -- # sync 00:08:35.390 07:59:46 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:35.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.390 07:59:46 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:35.390 07:59:46 -- common/autotest_common.sh@1208 -- # local i=0 00:08:35.390 07:59:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:35.390 07:59:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.647 07:59:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:35.647 07:59:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:35.647 07:59:46 -- common/autotest_common.sh@1220 -- # return 0 00:08:35.647 07:59:46 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:35.647 07:59:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.647 07:59:46 -- common/autotest_common.sh@10 -- # set +x 00:08:35.647 07:59:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.647 07:59:46 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:35.647 07:59:46 -- target/filesystem.sh@101 -- # killprocess 72913 00:08:35.647 07:59:46 -- common/autotest_common.sh@936 -- # '[' -z 72913 ']' 00:08:35.647 07:59:46 -- common/autotest_common.sh@940 -- # kill -0 72913 00:08:35.647 07:59:46 -- 
common/autotest_common.sh@941 -- # uname 00:08:35.647 07:59:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:35.647 07:59:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72913 00:08:35.647 killing process with pid 72913 00:08:35.647 07:59:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:35.647 07:59:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:35.647 07:59:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72913' 00:08:35.648 07:59:46 -- common/autotest_common.sh@955 -- # kill 72913 00:08:35.648 07:59:46 -- common/autotest_common.sh@960 -- # wait 72913 00:08:35.906 ************************************ 00:08:35.906 END TEST nvmf_filesystem_in_capsule 00:08:35.906 ************************************ 00:08:35.906 07:59:47 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:35.906 00:08:35.906 real 0m13.815s 00:08:35.906 user 0m52.836s 00:08:35.906 sys 0m2.067s 00:08:35.906 07:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.906 07:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:35.906 07:59:47 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:35.906 07:59:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:35.906 07:59:47 -- nvmf/common.sh@116 -- # sync 00:08:36.165 07:59:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:36.165 07:59:47 -- nvmf/common.sh@119 -- # set +e 00:08:36.165 07:59:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:36.165 07:59:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:36.165 rmmod nvme_tcp 00:08:36.165 rmmod nvme_fabrics 00:08:36.165 rmmod nvme_keyring 00:08:36.165 07:59:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:36.165 07:59:47 -- nvmf/common.sh@123 -- # set -e 00:08:36.165 07:59:47 -- nvmf/common.sh@124 -- # return 0 00:08:36.165 07:59:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:36.165 07:59:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:36.165 07:59:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:36.165 07:59:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:36.165 07:59:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.165 07:59:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:36.165 07:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.165 07:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.165 07:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.165 07:59:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:36.165 00:08:36.165 real 0m29.286s 00:08:36.165 user 1m48.689s 00:08:36.165 sys 0m4.555s 00:08:36.165 ************************************ 00:08:36.165 END TEST nvmf_filesystem 00:08:36.165 07:59:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:36.165 07:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 ************************************ 00:08:36.165 07:59:47 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:36.165 07:59:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.165 07:59:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.165 07:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 ************************************ 00:08:36.165 START TEST nvmf_discovery 00:08:36.165 ************************************ 00:08:36.165 07:59:47 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:36.165 * Looking for test storage... 00:08:36.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.165 07:59:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:36.165 07:59:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:36.165 07:59:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:36.424 07:59:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:36.424 07:59:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:36.424 07:59:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:36.424 07:59:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:36.424 07:59:47 -- scripts/common.sh@335 -- # IFS=.-: 00:08:36.424 07:59:47 -- scripts/common.sh@335 -- # read -ra ver1 00:08:36.424 07:59:47 -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.424 07:59:47 -- scripts/common.sh@336 -- # read -ra ver2 00:08:36.424 07:59:47 -- scripts/common.sh@337 -- # local 'op=<' 00:08:36.424 07:59:47 -- scripts/common.sh@339 -- # ver1_l=2 00:08:36.424 07:59:47 -- scripts/common.sh@340 -- # ver2_l=1 00:08:36.424 07:59:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:36.424 07:59:47 -- scripts/common.sh@343 -- # case "$op" in 00:08:36.424 07:59:47 -- scripts/common.sh@344 -- # : 1 00:08:36.424 07:59:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:36.424 07:59:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.424 07:59:47 -- scripts/common.sh@364 -- # decimal 1 00:08:36.424 07:59:47 -- scripts/common.sh@352 -- # local d=1 00:08:36.424 07:59:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.424 07:59:47 -- scripts/common.sh@354 -- # echo 1 00:08:36.424 07:59:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:36.424 07:59:47 -- scripts/common.sh@365 -- # decimal 2 00:08:36.424 07:59:47 -- scripts/common.sh@352 -- # local d=2 00:08:36.424 07:59:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.424 07:59:47 -- scripts/common.sh@354 -- # echo 2 00:08:36.424 07:59:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:36.424 07:59:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:36.424 07:59:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:36.424 07:59:47 -- scripts/common.sh@367 -- # return 0 00:08:36.424 07:59:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.424 07:59:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.424 --rc genhtml_branch_coverage=1 00:08:36.424 --rc genhtml_function_coverage=1 00:08:36.424 --rc genhtml_legend=1 00:08:36.424 --rc geninfo_all_blocks=1 00:08:36.424 --rc geninfo_unexecuted_blocks=1 00:08:36.424 00:08:36.424 ' 00:08:36.424 07:59:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.424 --rc genhtml_branch_coverage=1 00:08:36.424 --rc genhtml_function_coverage=1 00:08:36.424 --rc genhtml_legend=1 00:08:36.424 --rc geninfo_all_blocks=1 00:08:36.424 --rc geninfo_unexecuted_blocks=1 00:08:36.424 00:08:36.424 ' 00:08:36.424 07:59:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.424 --rc genhtml_branch_coverage=1 00:08:36.424 --rc genhtml_function_coverage=1 00:08:36.424 --rc genhtml_legend=1 00:08:36.424 
--rc geninfo_all_blocks=1 00:08:36.424 --rc geninfo_unexecuted_blocks=1 00:08:36.424 00:08:36.424 ' 00:08:36.424 07:59:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:36.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.424 --rc genhtml_branch_coverage=1 00:08:36.424 --rc genhtml_function_coverage=1 00:08:36.424 --rc genhtml_legend=1 00:08:36.424 --rc geninfo_all_blocks=1 00:08:36.424 --rc geninfo_unexecuted_blocks=1 00:08:36.424 00:08:36.424 ' 00:08:36.424 07:59:47 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.424 07:59:47 -- nvmf/common.sh@7 -- # uname -s 00:08:36.424 07:59:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.424 07:59:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.424 07:59:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.424 07:59:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.424 07:59:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.424 07:59:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.424 07:59:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.424 07:59:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.424 07:59:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.424 07:59:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.424 07:59:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:08:36.424 07:59:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:08:36.424 07:59:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.424 07:59:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.424 07:59:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.424 07:59:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.424 07:59:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.424 07:59:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.424 07:59:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.424 07:59:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.425 07:59:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.425 07:59:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.425 07:59:47 -- paths/export.sh@5 -- # export PATH 00:08:36.425 07:59:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.425 07:59:47 -- nvmf/common.sh@46 -- # : 0 00:08:36.425 07:59:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:36.425 07:59:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:36.425 07:59:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:36.425 07:59:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.425 07:59:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.425 07:59:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:36.425 07:59:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:36.425 07:59:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:36.425 07:59:47 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:36.425 07:59:47 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:36.425 07:59:47 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:36.425 07:59:47 -- target/discovery.sh@15 -- # hash nvme 00:08:36.425 07:59:47 -- target/discovery.sh@20 -- # nvmftestinit 00:08:36.425 07:59:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:36.425 07:59:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.425 07:59:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:36.425 07:59:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:36.425 07:59:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:36.425 07:59:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.425 07:59:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.425 07:59:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.425 07:59:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:36.425 07:59:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:36.425 07:59:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:36.425 07:59:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:36.425 07:59:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:36.425 07:59:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:36.425 07:59:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.425 07:59:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.425 07:59:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:36.425 07:59:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:36.425 07:59:47 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.425 07:59:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.425 07:59:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.425 07:59:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.425 07:59:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.425 07:59:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.425 07:59:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.425 07:59:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.425 07:59:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:36.425 07:59:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:36.425 Cannot find device "nvmf_tgt_br" 00:08:36.425 07:59:47 -- nvmf/common.sh@154 -- # true 00:08:36.425 07:59:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.425 Cannot find device "nvmf_tgt_br2" 00:08:36.425 07:59:47 -- nvmf/common.sh@155 -- # true 00:08:36.425 07:59:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:36.425 07:59:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:36.425 Cannot find device "nvmf_tgt_br" 00:08:36.425 07:59:47 -- nvmf/common.sh@157 -- # true 00:08:36.425 07:59:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:36.425 Cannot find device "nvmf_tgt_br2" 00:08:36.425 07:59:47 -- nvmf/common.sh@158 -- # true 00:08:36.425 07:59:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:36.425 07:59:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:36.425 07:59:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.425 07:59:47 -- nvmf/common.sh@161 -- # true 00:08:36.425 07:59:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.425 07:59:47 -- nvmf/common.sh@162 -- # true 00:08:36.425 07:59:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:36.425 07:59:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.683 07:59:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.683 07:59:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.683 07:59:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.683 07:59:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.683 07:59:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.683 07:59:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:36.683 07:59:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:36.683 07:59:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:36.683 07:59:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:36.683 07:59:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:36.683 07:59:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:36.683 07:59:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.683 07:59:47 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.683 07:59:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.683 07:59:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:36.683 07:59:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:36.683 07:59:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.683 07:59:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.683 07:59:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.683 07:59:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.683 07:59:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.683 07:59:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:36.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:08:36.683 00:08:36.683 --- 10.0.0.2 ping statistics --- 00:08:36.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.683 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:36.683 07:59:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:36.683 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.683 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:36.683 00:08:36.683 --- 10.0.0.3 ping statistics --- 00:08:36.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.683 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:36.683 07:59:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:36.683 00:08:36.683 --- 10.0.0.1 ping statistics --- 00:08:36.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.683 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:36.683 07:59:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.683 07:59:47 -- nvmf/common.sh@421 -- # return 0 00:08:36.683 07:59:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:36.683 07:59:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.683 07:59:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:36.683 07:59:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:36.683 07:59:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.683 07:59:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:36.683 07:59:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:36.683 07:59:47 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:36.683 07:59:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:36.683 07:59:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.683 07:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.683 07:59:47 -- nvmf/common.sh@469 -- # nvmfpid=73459 00:08:36.683 07:59:47 -- nvmf/common.sh@470 -- # waitforlisten 73459 00:08:36.683 07:59:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.683 07:59:47 -- common/autotest_common.sh@829 -- # '[' -z 73459 ']' 00:08:36.683 07:59:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.683 07:59:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.683 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.683 07:59:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.683 07:59:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.683 07:59:47 -- common/autotest_common.sh@10 -- # set +x 00:08:36.683 [2024-12-07 07:59:47.940903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:36.683 [2024-12-07 07:59:47.941257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.941 [2024-12-07 07:59:48.085016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.941 [2024-12-07 07:59:48.178875] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.941 [2024-12-07 07:59:48.179055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.941 [2024-12-07 07:59:48.179075] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.941 [2024-12-07 07:59:48.179086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.941 [2024-12-07 07:59:48.179265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.941 [2024-12-07 07:59:48.179529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.941 [2024-12-07 07:59:48.180269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.941 [2024-12-07 07:59:48.180520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.875 07:59:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.875 07:59:48 -- common/autotest_common.sh@862 -- # return 0 00:08:37.875 07:59:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:37.875 07:59:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.875 07:59:48 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 07:59:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.875 07:59:49 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 [2024-12-07 07:59:49.026928] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@26 -- # seq 1 4 00:08:37.875 07:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.875 07:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 Null1 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
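For readers following the trace: discovery.sh is provisioning the target over SPDK's JSON-RPC interface — create the TCP transport, then for each of four null bdevs create a subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420 (the Null2–Null4 iterations continue below). A rough standalone sketch of the same sequence using SPDK's scripts/rpc.py; the test itself goes through its rpc_cmd wrapper inside the nvmf_tgt_ns_spdk namespace, and the rpc.py path below is an assumption:

```bash
#!/usr/bin/env bash
# Sketch of the provisioning steps traced above, issued directly with
# SPDK's scripts/rpc.py against an already-running nvmf_tgt.
set -euo pipefail

RPC=./scripts/rpc.py   # assumed path to rpc.py in an SPDK checkout

# TCP transport, flags as they appear in the trace (-o, -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
    # NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512 from the trace
    $RPC bdev_null_create "Null$i" 102400 512
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
         -a -s "SPDK0000000000000$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
         -t tcp -a 10.0.0.2 -s 4420
done
```

With all four subsystems listening, the later `nvme discover` against 10.0.0.2:4420 should report six records: the current discovery subsystem, the four NVMe subsystems, and the 4430 referral the test adds next.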
00:08:37.875 07:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 [2024-12-07 07:59:49.092716] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.875 07:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 Null2 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:37.875 07:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 Null3 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:37.875 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.875 07:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:37.875 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.875 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:38.134 07:59:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 Null4 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.134 07:59:49 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 4420 00:08:38.134 00:08:38.134 Discovery Log Number of Records 6, Generation counter 6 00:08:38.134 =====Discovery Log Entry 0====== 00:08:38.134 trtype: tcp 00:08:38.134 adrfam: ipv4 00:08:38.134 subtype: current discovery subsystem 00:08:38.134 treq: not required 00:08:38.134 portid: 0 00:08:38.134 trsvcid: 4420 00:08:38.134 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.134 traddr: 10.0.0.2 00:08:38.134 eflags: explicit discovery connections, duplicate discovery information 00:08:38.134 sectype: none 00:08:38.134 =====Discovery Log Entry 1====== 00:08:38.134 trtype: tcp 00:08:38.134 adrfam: ipv4 00:08:38.134 subtype: nvme subsystem 00:08:38.134 treq: not required 00:08:38.134 portid: 0 00:08:38.134 trsvcid: 4420 00:08:38.134 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:38.134 traddr: 10.0.0.2 00:08:38.134 eflags: none 00:08:38.134 sectype: none 00:08:38.134 =====Discovery Log Entry 2====== 00:08:38.134 trtype: tcp 00:08:38.134 adrfam: ipv4 00:08:38.134 subtype: nvme subsystem 00:08:38.134 treq: not required 00:08:38.134 portid: 0 00:08:38.134 trsvcid: 4420 
00:08:38.134 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:38.134 traddr: 10.0.0.2 00:08:38.134 eflags: none 00:08:38.134 sectype: none 00:08:38.134 =====Discovery Log Entry 3====== 00:08:38.134 trtype: tcp 00:08:38.134 adrfam: ipv4 00:08:38.134 subtype: nvme subsystem 00:08:38.134 treq: not required 00:08:38.134 portid: 0 00:08:38.134 trsvcid: 4420 00:08:38.134 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:38.134 traddr: 10.0.0.2 00:08:38.134 eflags: none 00:08:38.134 sectype: none 00:08:38.134 =====Discovery Log Entry 4====== 00:08:38.134 trtype: tcp 00:08:38.134 adrfam: ipv4 00:08:38.134 subtype: nvme subsystem 00:08:38.134 treq: not required 00:08:38.134 portid: 0 00:08:38.134 trsvcid: 4420 00:08:38.134 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:38.134 traddr: 10.0.0.2 00:08:38.134 eflags: none 00:08:38.134 sectype: none 00:08:38.134 =====Discovery Log Entry 5====== 00:08:38.134 trtype: tcp 00:08:38.134 adrfam: ipv4 00:08:38.134 subtype: discovery subsystem referral 00:08:38.134 treq: not required 00:08:38.134 portid: 0 00:08:38.134 trsvcid: 4430 00:08:38.134 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:38.134 traddr: 10.0.0.2 00:08:38.134 eflags: none 00:08:38.134 sectype: none 00:08:38.134 Perform nvmf subsystem discovery via RPC 00:08:38.134 07:59:49 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:38.134 07:59:49 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:38.134 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.134 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.134 [2024-12-07 07:59:49.324751] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:38.134 [ 00:08:38.134 { 00:08:38.134 "allow_any_host": true, 00:08:38.134 "hosts": [], 00:08:38.134 "listen_addresses": [ 00:08:38.134 { 00:08:38.134 "adrfam": "IPv4", 00:08:38.134 "traddr": "10.0.0.2", 00:08:38.134 "transport": "TCP", 00:08:38.134 "trsvcid": "4420", 00:08:38.134 "trtype": "TCP" 00:08:38.134 } 00:08:38.134 ], 00:08:38.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:38.134 "subtype": "Discovery" 00:08:38.134 }, 00:08:38.134 { 00:08:38.134 "allow_any_host": true, 00:08:38.134 "hosts": [], 00:08:38.134 "listen_addresses": [ 00:08:38.134 { 00:08:38.134 "adrfam": "IPv4", 00:08:38.134 "traddr": "10.0.0.2", 00:08:38.134 "transport": "TCP", 00:08:38.134 "trsvcid": "4420", 00:08:38.134 "trtype": "TCP" 00:08:38.134 } 00:08:38.134 ], 00:08:38.134 "max_cntlid": 65519, 00:08:38.134 "max_namespaces": 32, 00:08:38.134 "min_cntlid": 1, 00:08:38.134 "model_number": "SPDK bdev Controller", 00:08:38.134 "namespaces": [ 00:08:38.134 { 00:08:38.134 "bdev_name": "Null1", 00:08:38.134 "name": "Null1", 00:08:38.134 "nguid": "7670C1B426BD4FE48A85E0CE774CE38A", 00:08:38.134 "nsid": 1, 00:08:38.134 "uuid": "7670c1b4-26bd-4fe4-8a85-e0ce774ce38a" 00:08:38.134 } 00:08:38.134 ], 00:08:38.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.134 "serial_number": "SPDK00000000000001", 00:08:38.134 "subtype": "NVMe" 00:08:38.134 }, 00:08:38.134 { 00:08:38.134 "allow_any_host": true, 00:08:38.134 "hosts": [], 00:08:38.134 "listen_addresses": [ 00:08:38.134 { 00:08:38.134 "adrfam": "IPv4", 00:08:38.134 "traddr": "10.0.0.2", 00:08:38.134 "transport": "TCP", 00:08:38.134 "trsvcid": "4420", 00:08:38.134 "trtype": "TCP" 00:08:38.134 } 00:08:38.134 ], 00:08:38.134 "max_cntlid": 65519, 00:08:38.134 "max_namespaces": 32, 00:08:38.134 "min_cntlid": 1, 
00:08:38.134 "model_number": "SPDK bdev Controller", 00:08:38.134 "namespaces": [ 00:08:38.134 { 00:08:38.134 "bdev_name": "Null2", 00:08:38.134 "name": "Null2", 00:08:38.134 "nguid": "0BA79D8C7BAE4BFD91154FCAB8F6323E", 00:08:38.134 "nsid": 1, 00:08:38.134 "uuid": "0ba79d8c-7bae-4bfd-9115-4fcab8f6323e" 00:08:38.134 } 00:08:38.134 ], 00:08:38.134 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:38.134 "serial_number": "SPDK00000000000002", 00:08:38.134 "subtype": "NVMe" 00:08:38.134 }, 00:08:38.134 { 00:08:38.134 "allow_any_host": true, 00:08:38.134 "hosts": [], 00:08:38.134 "listen_addresses": [ 00:08:38.134 { 00:08:38.134 "adrfam": "IPv4", 00:08:38.134 "traddr": "10.0.0.2", 00:08:38.134 "transport": "TCP", 00:08:38.134 "trsvcid": "4420", 00:08:38.134 "trtype": "TCP" 00:08:38.134 } 00:08:38.134 ], 00:08:38.134 "max_cntlid": 65519, 00:08:38.134 "max_namespaces": 32, 00:08:38.135 "min_cntlid": 1, 00:08:38.135 "model_number": "SPDK bdev Controller", 00:08:38.135 "namespaces": [ 00:08:38.135 { 00:08:38.135 "bdev_name": "Null3", 00:08:38.135 "name": "Null3", 00:08:38.135 "nguid": "90A5F4FBA671414C8C7E91C3F0E18CDB", 00:08:38.135 "nsid": 1, 00:08:38.135 "uuid": "90a5f4fb-a671-414c-8c7e-91c3f0e18cdb" 00:08:38.135 } 00:08:38.135 ], 00:08:38.135 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:38.135 "serial_number": "SPDK00000000000003", 00:08:38.135 "subtype": "NVMe" 00:08:38.135 }, 00:08:38.135 { 00:08:38.135 "allow_any_host": true, 00:08:38.135 "hosts": [], 00:08:38.135 "listen_addresses": [ 00:08:38.135 { 00:08:38.135 "adrfam": "IPv4", 00:08:38.135 "traddr": "10.0.0.2", 00:08:38.135 "transport": "TCP", 00:08:38.135 "trsvcid": "4420", 00:08:38.135 "trtype": "TCP" 00:08:38.135 } 00:08:38.135 ], 00:08:38.135 "max_cntlid": 65519, 00:08:38.135 "max_namespaces": 32, 00:08:38.135 "min_cntlid": 1, 00:08:38.135 "model_number": "SPDK bdev Controller", 00:08:38.135 "namespaces": [ 00:08:38.135 { 00:08:38.135 "bdev_name": "Null4", 00:08:38.135 "name": "Null4", 00:08:38.135 "nguid": "FE1BBC65D46B495385D1031AF0C730B7", 00:08:38.135 "nsid": 1, 00:08:38.135 "uuid": "fe1bbc65-d46b-4953-85d1-031af0c730b7" 00:08:38.135 } 00:08:38.135 ], 00:08:38.135 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:38.135 "serial_number": "SPDK00000000000004", 00:08:38.135 "subtype": "NVMe" 00:08:38.135 } 00:08:38.135 ] 00:08:38.135 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.135 07:59:49 -- target/discovery.sh@42 -- # seq 1 4 00:08:38.135 07:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.135 07:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:38.135 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.135 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.135 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.135 07:59:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:38.135 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.135 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.135 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.135 07:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.135 07:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:38.135 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.135 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.135 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.135 07:59:49 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:38.135 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.135 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.135 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.135 07:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.135 07:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:38.135 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.135 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.135 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.135 07:59:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:38.135 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.135 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.393 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.393 07:59:49 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:38.393 07:59:49 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:38.393 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.393 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.393 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.393 07:59:49 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:38.393 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.393 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.393 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.393 07:59:49 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:38.393 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.393 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.393 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.393 07:59:49 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:38.393 07:59:49 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:38.393 07:59:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.393 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.393 07:59:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.393 07:59:49 -- target/discovery.sh@49 -- # check_bdevs= 00:08:38.393 07:59:49 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:38.393 07:59:49 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:38.393 07:59:49 -- target/discovery.sh@57 -- # nvmftestfini 00:08:38.393 07:59:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:38.393 07:59:49 -- nvmf/common.sh@116 -- # sync 00:08:38.393 07:59:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:38.393 07:59:49 -- nvmf/common.sh@119 -- # set +e 00:08:38.393 07:59:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:38.393 07:59:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:38.393 rmmod nvme_tcp 00:08:38.393 rmmod nvme_fabrics 00:08:38.393 rmmod nvme_keyring 00:08:38.393 07:59:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:38.393 07:59:49 -- nvmf/common.sh@123 -- # set -e 00:08:38.393 07:59:49 -- nvmf/common.sh@124 -- # return 0 00:08:38.393 07:59:49 -- nvmf/common.sh@477 -- # '[' -n 73459 ']' 00:08:38.393 07:59:49 -- nvmf/common.sh@478 -- # killprocess 73459 00:08:38.393 07:59:49 -- common/autotest_common.sh@936 -- # '[' -z 73459 ']' 00:08:38.393 07:59:49 -- 
common/autotest_common.sh@940 -- # kill -0 73459 00:08:38.393 07:59:49 -- common/autotest_common.sh@941 -- # uname 00:08:38.393 07:59:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:38.393 07:59:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73459 00:08:38.393 killing process with pid 73459 00:08:38.393 07:59:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:38.393 07:59:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:38.393 07:59:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73459' 00:08:38.393 07:59:49 -- common/autotest_common.sh@955 -- # kill 73459 00:08:38.393 [2024-12-07 07:59:49.603417] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:38.393 07:59:49 -- common/autotest_common.sh@960 -- # wait 73459 00:08:38.651 07:59:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:38.651 07:59:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:38.651 07:59:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:38.651 07:59:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.651 07:59:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:38.651 07:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.651 07:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.651 07:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.651 07:59:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:38.651 00:08:38.651 real 0m2.501s 00:08:38.651 user 0m6.977s 00:08:38.651 sys 0m0.615s 00:08:38.651 07:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:38.651 ************************************ 00:08:38.651 END TEST nvmf_discovery 00:08:38.651 ************************************ 00:08:38.651 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.651 07:59:49 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.651 07:59:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:38.651 07:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:38.651 07:59:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.651 ************************************ 00:08:38.651 START TEST nvmf_referrals 00:08:38.651 ************************************ 00:08:38.651 07:59:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:38.910 * Looking for test storage... 
00:08:38.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.910 07:59:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:38.910 07:59:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:38.910 07:59:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:38.910 07:59:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:38.910 07:59:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:38.910 07:59:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:38.910 07:59:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:38.910 07:59:50 -- scripts/common.sh@335 -- # IFS=.-: 00:08:38.910 07:59:50 -- scripts/common.sh@335 -- # read -ra ver1 00:08:38.910 07:59:50 -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.910 07:59:50 -- scripts/common.sh@336 -- # read -ra ver2 00:08:38.910 07:59:50 -- scripts/common.sh@337 -- # local 'op=<' 00:08:38.910 07:59:50 -- scripts/common.sh@339 -- # ver1_l=2 00:08:38.910 07:59:50 -- scripts/common.sh@340 -- # ver2_l=1 00:08:38.910 07:59:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:38.910 07:59:50 -- scripts/common.sh@343 -- # case "$op" in 00:08:38.910 07:59:50 -- scripts/common.sh@344 -- # : 1 00:08:38.910 07:59:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:38.910 07:59:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.910 07:59:50 -- scripts/common.sh@364 -- # decimal 1 00:08:38.910 07:59:50 -- scripts/common.sh@352 -- # local d=1 00:08:38.910 07:59:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.910 07:59:50 -- scripts/common.sh@354 -- # echo 1 00:08:38.910 07:59:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:38.910 07:59:50 -- scripts/common.sh@365 -- # decimal 2 00:08:38.910 07:59:50 -- scripts/common.sh@352 -- # local d=2 00:08:38.910 07:59:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.910 07:59:50 -- scripts/common.sh@354 -- # echo 2 00:08:38.910 07:59:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:38.910 07:59:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:38.910 07:59:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:38.910 07:59:50 -- scripts/common.sh@367 -- # return 0 00:08:38.910 07:59:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.910 07:59:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:38.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.910 --rc genhtml_branch_coverage=1 00:08:38.910 --rc genhtml_function_coverage=1 00:08:38.910 --rc genhtml_legend=1 00:08:38.910 --rc geninfo_all_blocks=1 00:08:38.910 --rc geninfo_unexecuted_blocks=1 00:08:38.910 00:08:38.910 ' 00:08:38.910 07:59:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:38.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.910 --rc genhtml_branch_coverage=1 00:08:38.910 --rc genhtml_function_coverage=1 00:08:38.910 --rc genhtml_legend=1 00:08:38.910 --rc geninfo_all_blocks=1 00:08:38.910 --rc geninfo_unexecuted_blocks=1 00:08:38.910 00:08:38.910 ' 00:08:38.910 07:59:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:38.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.910 --rc genhtml_branch_coverage=1 00:08:38.910 --rc genhtml_function_coverage=1 00:08:38.910 --rc genhtml_legend=1 00:08:38.910 --rc geninfo_all_blocks=1 00:08:38.910 --rc geninfo_unexecuted_blocks=1 00:08:38.910 00:08:38.910 ' 00:08:38.910 
07:59:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:38.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.910 --rc genhtml_branch_coverage=1 00:08:38.910 --rc genhtml_function_coverage=1 00:08:38.910 --rc genhtml_legend=1 00:08:38.910 --rc geninfo_all_blocks=1 00:08:38.910 --rc geninfo_unexecuted_blocks=1 00:08:38.910 00:08:38.910 ' 00:08:38.910 07:59:50 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.910 07:59:50 -- nvmf/common.sh@7 -- # uname -s 00:08:38.910 07:59:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.910 07:59:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.910 07:59:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.910 07:59:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.910 07:59:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.910 07:59:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.910 07:59:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.910 07:59:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.910 07:59:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.910 07:59:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.910 07:59:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:08:38.910 07:59:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:08:38.910 07:59:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.910 07:59:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.910 07:59:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.910 07:59:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.910 07:59:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.910 07:59:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.910 07:59:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.910 07:59:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.910 07:59:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.911 07:59:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.911 07:59:50 -- paths/export.sh@5 -- # export PATH 00:08:38.911 07:59:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.911 07:59:50 -- nvmf/common.sh@46 -- # : 0 00:08:38.911 07:59:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:38.911 07:59:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:38.911 07:59:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:38.911 07:59:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.911 07:59:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.911 07:59:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:38.911 07:59:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:38.911 07:59:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:38.911 07:59:50 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:38.911 07:59:50 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:38.911 07:59:50 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:38.911 07:59:50 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:38.911 07:59:50 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:38.911 07:59:50 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:38.911 07:59:50 -- target/referrals.sh@37 -- # nvmftestinit 00:08:38.911 07:59:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:38.911 07:59:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.911 07:59:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:38.911 07:59:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:38.911 07:59:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:38.911 07:59:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.911 07:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.911 07:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.911 07:59:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:38.911 07:59:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:38.911 07:59:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:38.911 07:59:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:38.911 07:59:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:38.911 07:59:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:38.911 07:59:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.911 07:59:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:38.911 07:59:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:38.911 07:59:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:38.911 07:59:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.911 07:59:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.911 07:59:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.911 07:59:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.911 07:59:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.911 07:59:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.911 07:59:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.911 07:59:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.911 07:59:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:38.911 07:59:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:38.911 Cannot find device "nvmf_tgt_br" 00:08:38.911 07:59:50 -- nvmf/common.sh@154 -- # true 00:08:38.911 07:59:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.911 Cannot find device "nvmf_tgt_br2" 00:08:38.911 07:59:50 -- nvmf/common.sh@155 -- # true 00:08:38.911 07:59:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:38.911 07:59:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:38.911 Cannot find device "nvmf_tgt_br" 00:08:38.911 07:59:50 -- nvmf/common.sh@157 -- # true 00:08:38.911 07:59:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:38.911 Cannot find device "nvmf_tgt_br2" 00:08:38.911 07:59:50 -- nvmf/common.sh@158 -- # true 00:08:38.911 07:59:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:39.169 07:59:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:39.169 07:59:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:39.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.169 07:59:50 -- nvmf/common.sh@161 -- # true 00:08:39.169 07:59:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:39.169 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:39.169 07:59:50 -- nvmf/common.sh@162 -- # true 00:08:39.169 07:59:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:39.169 07:59:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:39.169 07:59:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:39.169 07:59:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:39.169 07:59:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:39.169 07:59:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:39.169 07:59:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:39.169 07:59:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:39.169 07:59:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:39.169 07:59:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:39.169 07:59:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:39.169 07:59:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
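The nvmf_veth_init sequence being traced here (it finishes just below with the bridge, the iptables rules, and the ping checks) builds the virtual test topology: an initiator-side veth pair in the root namespace, two target-side veth pairs moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the root-namespace ends. A condensed sketch of those same commands, for reproducing the fixture by hand:

```bash
#!/usr/bin/env bash
# Condensed from the nvmf_veth_init trace: namespace + veth pairs + bridge.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# one veth pair for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the root-namespace ends and open TCP port 4420 for NVMe/TCP
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # root namespace -> target, as in the trace
```

The "Cannot find device" and "Cannot open network namespace" messages just above are the cleanup of any previous run failing harmlessly (each failing command is followed by `true`) before the fixture is recreated.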
00:08:39.169 07:59:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:39.169 07:59:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:39.169 07:59:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:39.169 07:59:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:39.169 07:59:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:39.169 07:59:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:39.169 07:59:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:39.169 07:59:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:39.169 07:59:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:39.169 07:59:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:39.169 07:59:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:39.169 07:59:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:39.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:39.169 00:08:39.169 --- 10.0.0.2 ping statistics --- 00:08:39.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.169 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:39.169 07:59:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:39.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:39.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:08:39.169 00:08:39.170 --- 10.0.0.3 ping statistics --- 00:08:39.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.170 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:39.170 07:59:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:39.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:08:39.170 00:08:39.170 --- 10.0.0.1 ping statistics --- 00:08:39.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.170 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:08:39.170 07:59:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.170 07:59:50 -- nvmf/common.sh@421 -- # return 0 00:08:39.170 07:59:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:39.170 07:59:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.170 07:59:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:39.170 07:59:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:39.170 07:59:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.170 07:59:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:39.170 07:59:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:39.170 07:59:50 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:39.170 07:59:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:39.170 07:59:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.170 07:59:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.170 07:59:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.170 07:59:50 -- nvmf/common.sh@469 -- # nvmfpid=73688 00:08:39.170 07:59:50 -- nvmf/common.sh@470 -- # waitforlisten 73688 00:08:39.170 07:59:50 -- common/autotest_common.sh@829 -- # '[' -z 73688 ']' 00:08:39.170 07:59:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.170 07:59:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.170 07:59:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.170 07:59:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.170 07:59:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.428 [2024-12-07 07:59:50.486478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:39.428 [2024-12-07 07:59:50.486590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.428 [2024-12-07 07:59:50.627630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.686 [2024-12-07 07:59:50.708991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.686 [2024-12-07 07:59:50.709488] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.686 [2024-12-07 07:59:50.709764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.686 [2024-12-07 07:59:50.709964] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
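With the namespace in place, nvmfappstart launches the target inside it and waitforlisten blocks until the JSON-RPC socket answers; the pid captured here (73688 for this run, 73459 for the discovery run above) is what killprocess tears down at the end. A minimal sketch of that step — the nvmf_tgt path is taken from the trace, and the socket wait below is a simplified stand-in for waitforlisten, which actually polls the RPC server:

```bash
#!/usr/bin/env bash
# Start nvmf_tgt inside the test namespace and wait for its RPC socket,
# roughly what nvmfappstart/waitforlisten do in the trace.
set -euo pipefail

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt   # path from the trace

ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# simplified wait: poll for the default RPC unix socket to appear
for _ in $(seq 1 100); do
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
done

echo "nvmf_tgt running as pid $nvmfpid"
# ... provision via scripts/rpc.py, run the checks, then: kill "$nvmfpid"
```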
00:08:39.686 [2024-12-07 07:59:50.710310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.686 [2024-12-07 07:59:50.710379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.686 [2024-12-07 07:59:50.710430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.686 [2024-12-07 07:59:50.710429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.250 07:59:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.251 07:59:51 -- common/autotest_common.sh@862 -- # return 0 00:08:40.251 07:59:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:40.251 07:59:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:40.251 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 07:59:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.508 07:59:51 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.508 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 [2024-12-07 07:59:51.564932] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.508 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.508 07:59:51 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:40.508 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 [2024-12-07 07:59:51.602676] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:40.508 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.508 07:59:51 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.508 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.508 07:59:51 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.508 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.508 07:59:51 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:40.508 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.508 07:59:51 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.508 07:59:51 -- target/referrals.sh@48 -- # jq length 00:08:40.508 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.508 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.508 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.509 07:59:51 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:40.509 07:59:51 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:40.509 07:59:51 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:40.509 07:59:51 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.509 07:59:51 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
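referrals.sh has now registered a discovery listener on 10.0.0.2:8009 and three referrals (127.0.0.2, 127.0.0.3, 127.0.0.4, each on port 4430); the get_referral_ips helper being traced compares the addresses reported by the nvmf_discovery_get_referrals RPC with what an initiator sees via nvme discover (the sort and comparison continue just below). A hand-run sketch of the same cross-check, assuming scripts/rpc.py, nvme-cli, and jq are available:

```bash
#!/usr/bin/env bash
# Register discovery referrals and cross-check the RPC view against
# the initiator's discovery log, mirroring the referrals.sh trace.
set -euo pipefail

RPC=./scripts/rpc.py   # assumed path to rpc.py

$RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# target-side view: referral addresses known to nvmf_tgt
rpc_ips=$($RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

# initiator-side view: discovery log entries other than the current
# discovery subsystem itself (the test also passes --hostnqn/--hostid,
# omitted here for brevity)
nvme_ips=$(nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
    sort)

if [[ "$rpc_ips" == "$nvme_ips" ]]; then
    echo "referrals agree:" $rpc_ips
fi
```

Later in the trace the three referrals are removed again and re-added with explicit subsystem NQNs (-n discovery and -n nqn.2016-06.io.spdk:cnode1) to check that the advertised subtype and subnqn track the referral type.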
00:08:40.509 07:59:51 -- target/referrals.sh@21 -- # sort 00:08:40.509 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.509 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.509 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.509 07:59:51 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.509 07:59:51 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.509 07:59:51 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:40.509 07:59:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.509 07:59:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.509 07:59:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.509 07:59:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.509 07:59:51 -- target/referrals.sh@26 -- # sort 00:08:40.780 07:59:51 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:40.780 07:59:51 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:40.780 07:59:51 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:40.780 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.780 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.780 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.780 07:59:51 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:40.780 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.780 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.780 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.780 07:59:51 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:40.780 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.780 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.780 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.780 07:59:51 -- target/referrals.sh@56 -- # jq length 00:08:40.780 07:59:51 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:40.780 07:59:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.780 07:59:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.780 07:59:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.780 07:59:51 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:40.780 07:59:51 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:40.780 07:59:51 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:40.780 07:59:51 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:40.780 07:59:51 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:40.780 07:59:51 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:40.780 07:59:51 -- target/referrals.sh@26 -- # sort 00:08:41.050 07:59:52 -- target/referrals.sh@26 -- # echo 00:08:41.050 07:59:52 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:41.050 07:59:52 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:41.050 07:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.050 07:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:41.050 07:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.050 07:59:52 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:41.050 07:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.050 07:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:41.050 07:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.050 07:59:52 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:41.050 07:59:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:41.050 07:59:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.050 07:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.051 07:59:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:41.051 07:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:41.051 07:59:52 -- target/referrals.sh@21 -- # sort 00:08:41.051 07:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.051 07:59:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:41.051 07:59:52 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:41.051 07:59:52 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:41.051 07:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.051 07:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.051 07:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.051 07:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.051 07:59:52 -- target/referrals.sh@26 -- # sort 00:08:41.051 07:59:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:41.051 07:59:52 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:41.051 07:59:52 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:41.051 07:59:52 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:41.051 07:59:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:41.051 07:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.051 07:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.308 07:59:52 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:41.308 07:59:52 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.308 07:59:52 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:41.308 07:59:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.308 07:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
--hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.308 07:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:41.308 07:59:52 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:41.308 07:59:52 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:41.308 07:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.308 07:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:41.309 07:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.309 07:59:52 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:41.309 07:59:52 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:41.309 07:59:52 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:41.309 07:59:52 -- target/referrals.sh@21 -- # sort 00:08:41.309 07:59:52 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.309 07:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.309 07:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:41.309 07:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.309 07:59:52 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:41.309 07:59:52 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.309 07:59:52 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:41.309 07:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.309 07:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.309 07:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.309 07:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.309 07:59:52 -- target/referrals.sh@26 -- # sort 00:08:41.566 07:59:52 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:41.566 07:59:52 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:41.566 07:59:52 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:41.566 07:59:52 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:41.566 07:59:52 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:41.566 07:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.566 07:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:41.566 07:59:52 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:41.566 07:59:52 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:41.566 07:59:52 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:41.566 07:59:52 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:41.566 07:59:52 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:41.566 07:59:52 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 
00:08:41.824 07:59:52 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:41.824 07:59:52 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:41.824 07:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.824 07:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:41.824 07:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.824 07:59:52 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:41.824 07:59:52 -- target/referrals.sh@82 -- # jq length 00:08:41.824 07:59:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.824 07:59:52 -- common/autotest_common.sh@10 -- # set +x 00:08:41.824 07:59:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.824 07:59:52 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:41.824 07:59:52 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:41.824 07:59:52 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:41.824 07:59:52 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:41.824 07:59:52 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:41.824 07:59:52 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:41.824 07:59:52 -- target/referrals.sh@26 -- # sort 00:08:42.083 07:59:53 -- target/referrals.sh@26 -- # echo 00:08:42.083 07:59:53 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:42.083 07:59:53 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:42.083 07:59:53 -- target/referrals.sh@86 -- # nvmftestfini 00:08:42.083 07:59:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:42.083 07:59:53 -- nvmf/common.sh@116 -- # sync 00:08:42.083 07:59:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:42.083 07:59:53 -- nvmf/common.sh@119 -- # set +e 00:08:42.083 07:59:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:42.083 07:59:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:42.083 rmmod nvme_tcp 00:08:42.083 rmmod nvme_fabrics 00:08:42.083 rmmod nvme_keyring 00:08:42.083 07:59:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:42.083 07:59:53 -- nvmf/common.sh@123 -- # set -e 00:08:42.083 07:59:53 -- nvmf/common.sh@124 -- # return 0 00:08:42.083 07:59:53 -- nvmf/common.sh@477 -- # '[' -n 73688 ']' 00:08:42.083 07:59:53 -- nvmf/common.sh@478 -- # killprocess 73688 00:08:42.083 07:59:53 -- common/autotest_common.sh@936 -- # '[' -z 73688 ']' 00:08:42.083 07:59:53 -- common/autotest_common.sh@940 -- # kill -0 73688 00:08:42.083 07:59:53 -- common/autotest_common.sh@941 -- # uname 00:08:42.083 07:59:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:42.083 07:59:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73688 00:08:42.083 07:59:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:42.083 07:59:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:42.083 killing process with pid 73688 00:08:42.083 07:59:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73688' 00:08:42.083 07:59:53 -- common/autotest_common.sh@955 -- # kill 73688 00:08:42.083 07:59:53 -- common/autotest_common.sh@960 -- # wait 73688 00:08:42.341 07:59:53 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.341 07:59:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.341 07:59:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.341 07:59:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.341 07:59:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.341 07:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.341 07:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.341 07:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.341 07:59:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:42.341 00:08:42.341 real 0m3.610s 00:08:42.341 user 0m11.993s 00:08:42.341 sys 0m0.910s 00:08:42.341 07:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.341 07:59:53 -- common/autotest_common.sh@10 -- # set +x 00:08:42.341 ************************************ 00:08:42.341 END TEST nvmf_referrals 00:08:42.341 ************************************ 00:08:42.341 07:59:53 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:42.341 07:59:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:42.342 07:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.342 07:59:53 -- common/autotest_common.sh@10 -- # set +x 00:08:42.342 ************************************ 00:08:42.342 START TEST nvmf_connect_disconnect 00:08:42.342 ************************************ 00:08:42.342 07:59:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:42.342 * Looking for test storage... 00:08:42.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:42.600 07:59:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:42.600 07:59:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:42.600 07:59:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:42.600 07:59:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:42.600 07:59:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:42.600 07:59:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:42.600 07:59:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:42.600 07:59:53 -- scripts/common.sh@335 -- # IFS=.-: 00:08:42.600 07:59:53 -- scripts/common.sh@335 -- # read -ra ver1 00:08:42.600 07:59:53 -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.600 07:59:53 -- scripts/common.sh@336 -- # read -ra ver2 00:08:42.600 07:59:53 -- scripts/common.sh@337 -- # local 'op=<' 00:08:42.600 07:59:53 -- scripts/common.sh@339 -- # ver1_l=2 00:08:42.600 07:59:53 -- scripts/common.sh@340 -- # ver2_l=1 00:08:42.600 07:59:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:42.600 07:59:53 -- scripts/common.sh@343 -- # case "$op" in 00:08:42.600 07:59:53 -- scripts/common.sh@344 -- # : 1 00:08:42.600 07:59:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:42.600 07:59:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.600 07:59:53 -- scripts/common.sh@364 -- # decimal 1 00:08:42.600 07:59:53 -- scripts/common.sh@352 -- # local d=1 00:08:42.600 07:59:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.600 07:59:53 -- scripts/common.sh@354 -- # echo 1 00:08:42.600 07:59:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:42.600 07:59:53 -- scripts/common.sh@365 -- # decimal 2 00:08:42.600 07:59:53 -- scripts/common.sh@352 -- # local d=2 00:08:42.600 07:59:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.600 07:59:53 -- scripts/common.sh@354 -- # echo 2 00:08:42.600 07:59:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:42.600 07:59:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:42.600 07:59:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:42.600 07:59:53 -- scripts/common.sh@367 -- # return 0 00:08:42.600 07:59:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.600 07:59:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:42.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.600 --rc genhtml_branch_coverage=1 00:08:42.600 --rc genhtml_function_coverage=1 00:08:42.600 --rc genhtml_legend=1 00:08:42.600 --rc geninfo_all_blocks=1 00:08:42.600 --rc geninfo_unexecuted_blocks=1 00:08:42.600 00:08:42.600 ' 00:08:42.600 07:59:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:42.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.600 --rc genhtml_branch_coverage=1 00:08:42.600 --rc genhtml_function_coverage=1 00:08:42.600 --rc genhtml_legend=1 00:08:42.601 --rc geninfo_all_blocks=1 00:08:42.601 --rc geninfo_unexecuted_blocks=1 00:08:42.601 00:08:42.601 ' 00:08:42.601 07:59:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.601 --rc genhtml_branch_coverage=1 00:08:42.601 --rc genhtml_function_coverage=1 00:08:42.601 --rc genhtml_legend=1 00:08:42.601 --rc geninfo_all_blocks=1 00:08:42.601 --rc geninfo_unexecuted_blocks=1 00:08:42.601 00:08:42.601 ' 00:08:42.601 07:59:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:42.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.601 --rc genhtml_branch_coverage=1 00:08:42.601 --rc genhtml_function_coverage=1 00:08:42.601 --rc genhtml_legend=1 00:08:42.601 --rc geninfo_all_blocks=1 00:08:42.601 --rc geninfo_unexecuted_blocks=1 00:08:42.601 00:08:42.601 ' 00:08:42.601 07:59:53 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:42.601 07:59:53 -- nvmf/common.sh@7 -- # uname -s 00:08:42.601 07:59:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.601 07:59:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.601 07:59:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.601 07:59:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.601 07:59:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.601 07:59:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.601 07:59:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.601 07:59:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.601 07:59:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.601 07:59:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.601 07:59:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
00:08:42.601 07:59:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:08:42.601 07:59:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.601 07:59:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.601 07:59:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:42.601 07:59:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.601 07:59:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.601 07:59:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.601 07:59:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.601 07:59:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.601 07:59:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.601 07:59:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.601 07:59:53 -- paths/export.sh@5 -- # export PATH 00:08:42.601 07:59:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.601 07:59:53 -- nvmf/common.sh@46 -- # : 0 00:08:42.601 07:59:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:42.601 07:59:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:42.601 07:59:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:42.601 07:59:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.601 07:59:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.601 07:59:53 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:42.601 07:59:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:42.601 07:59:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:42.601 07:59:53 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.601 07:59:53 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.601 07:59:53 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:42.601 07:59:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:42.601 07:59:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.601 07:59:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:42.601 07:59:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:42.601 07:59:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:42.601 07:59:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.601 07:59:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.601 07:59:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.601 07:59:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:42.601 07:59:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:42.601 07:59:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:42.601 07:59:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:42.601 07:59:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:42.601 07:59:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:42.601 07:59:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.601 07:59:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.601 07:59:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:42.601 07:59:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:42.601 07:59:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:42.601 07:59:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:42.601 07:59:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:42.601 07:59:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.601 07:59:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:42.601 07:59:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:42.601 07:59:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:42.601 07:59:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:42.601 07:59:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:42.601 07:59:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:42.601 Cannot find device "nvmf_tgt_br" 00:08:42.601 07:59:53 -- nvmf/common.sh@154 -- # true 00:08:42.601 07:59:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:42.601 Cannot find device "nvmf_tgt_br2" 00:08:42.601 07:59:53 -- nvmf/common.sh@155 -- # true 00:08:42.601 07:59:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:42.601 07:59:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:42.601 Cannot find device "nvmf_tgt_br" 00:08:42.601 07:59:53 -- nvmf/common.sh@157 -- # true 00:08:42.601 07:59:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:42.601 Cannot find device "nvmf_tgt_br2" 00:08:42.601 07:59:53 -- nvmf/common.sh@158 -- # true 00:08:42.601 07:59:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:42.859 07:59:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:42.859 07:59:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:08:42.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.859 07:59:53 -- nvmf/common.sh@161 -- # true 00:08:42.859 07:59:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:42.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:42.859 07:59:53 -- nvmf/common.sh@162 -- # true 00:08:42.859 07:59:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:42.859 07:59:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:42.859 07:59:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:42.859 07:59:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:42.859 07:59:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:42.859 07:59:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:42.859 07:59:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:42.859 07:59:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:42.859 07:59:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:42.859 07:59:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:42.859 07:59:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:42.859 07:59:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:42.859 07:59:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:42.859 07:59:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:42.859 07:59:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:42.859 07:59:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:42.859 07:59:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:42.859 07:59:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:42.859 07:59:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:42.859 07:59:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:42.859 07:59:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:42.859 07:59:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:42.860 07:59:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:42.860 07:59:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:42.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:08:42.860 00:08:42.860 --- 10.0.0.2 ping statistics --- 00:08:42.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.860 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:42.860 07:59:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:42.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:42.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:42.860 00:08:42.860 --- 10.0.0.3 ping statistics --- 00:08:42.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.860 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:42.860 07:59:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:42.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:42.860 00:08:42.860 --- 10.0.0.1 ping statistics --- 00:08:42.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.860 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:42.860 07:59:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.860 07:59:54 -- nvmf/common.sh@421 -- # return 0 00:08:42.860 07:59:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:42.860 07:59:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.860 07:59:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:42.860 07:59:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:42.860 07:59:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.860 07:59:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:42.860 07:59:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:43.118 07:59:54 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:43.118 07:59:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:43.118 07:59:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.118 07:59:54 -- common/autotest_common.sh@10 -- # set +x 00:08:43.118 07:59:54 -- nvmf/common.sh@469 -- # nvmfpid=74003 00:08:43.118 07:59:54 -- nvmf/common.sh@470 -- # waitforlisten 74003 00:08:43.118 07:59:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.118 07:59:54 -- common/autotest_common.sh@829 -- # '[' -z 74003 ']' 00:08:43.118 07:59:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.118 07:59:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.118 07:59:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.118 07:59:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.118 07:59:54 -- common/autotest_common.sh@10 -- # set +x 00:08:43.118 [2024-12-07 07:59:54.200801] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.118 [2024-12-07 07:59:54.200906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.118 [2024-12-07 07:59:54.333680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.376 [2024-12-07 07:59:54.418514] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.376 [2024-12-07 07:59:54.418676] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.376 [2024-12-07 07:59:54.418689] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.376 [2024-12-07 07:59:54.418697] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
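Stripped of the xtrace plumbing, the connect_disconnect target setup traced below is the following sequence (a sketch under the same assumption that rpc_cmd maps to scripts/rpc.py; every RPC name and argument is taken from the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                     # assumed rpc_cmd backend
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0                   # same transport options as logged
$RPC bdev_malloc_create 64 512                                      # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener on 10.0.0.2:4420 in place, the script sets num_iterations=100 and NVME_CONNECT='nvme connect -i 8' and starts the connect/disconnect loop whose per-iteration confirmations follow.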
00:08:43.376 [2024-12-07 07:59:54.418861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.376 [2024-12-07 07:59:54.420120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.376 [2024-12-07 07:59:54.420246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.376 [2024-12-07 07:59:54.420455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.940 07:59:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.940 07:59:55 -- common/autotest_common.sh@862 -- # return 0 00:08:43.940 07:59:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:43.940 07:59:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.940 07:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 07:59:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:44.197 07:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.197 07:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 [2024-12-07 07:59:55.242475] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.197 07:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:44.197 07:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.197 07:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 07:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.197 07:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.197 07:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 07:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.197 07:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.197 07:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 07:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.197 07:59:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.197 07:59:55 -- common/autotest_common.sh@10 -- # set +x 00:08:44.197 [2024-12-07 07:59:55.324486] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.197 07:59:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:44.197 07:59:55 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:44.198 07:59:55 -- target/connect_disconnect.sh@34 -- # set +x 00:08:46.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:55.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.885 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.834 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:45.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.261 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.651 08:03:40 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
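Only the nvme-cli disconnect confirmations from each of the 100 iterations make it into the log above; the loop body itself is not traced here. Based on NVME_CONNECT='nvme connect -i 8', the listener on 10.0.0.2:4420 and the host NQN used for discovery earlier, each pass is roughly the following (a sketch, not the script's literal code):

for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec \
        --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "NQN:... disconnected 1 controller(s)"
done

A real pass presumably also waits for the controller to come up between connect and disconnect, which is why the 100 iterations account for most of the wall time reported in the timing summary below.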
00:12:29.651 08:03:40 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:29.651 08:03:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:29.651 08:03:40 -- nvmf/common.sh@116 -- # sync 00:12:29.651 08:03:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:29.651 08:03:40 -- nvmf/common.sh@119 -- # set +e 00:12:29.651 08:03:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:29.651 08:03:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:29.651 rmmod nvme_tcp 00:12:29.651 rmmod nvme_fabrics 00:12:29.651 rmmod nvme_keyring 00:12:29.651 08:03:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:29.651 08:03:40 -- nvmf/common.sh@123 -- # set -e 00:12:29.651 08:03:40 -- nvmf/common.sh@124 -- # return 0 00:12:29.651 08:03:40 -- nvmf/common.sh@477 -- # '[' -n 74003 ']' 00:12:29.651 08:03:40 -- nvmf/common.sh@478 -- # killprocess 74003 00:12:29.651 08:03:40 -- common/autotest_common.sh@936 -- # '[' -z 74003 ']' 00:12:29.651 08:03:40 -- common/autotest_common.sh@940 -- # kill -0 74003 00:12:29.651 08:03:40 -- common/autotest_common.sh@941 -- # uname 00:12:29.651 08:03:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.651 08:03:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74003 00:12:29.651 killing process with pid 74003 00:12:29.651 08:03:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:29.651 08:03:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:29.651 08:03:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74003' 00:12:29.651 08:03:40 -- common/autotest_common.sh@955 -- # kill 74003 00:12:29.651 08:03:40 -- common/autotest_common.sh@960 -- # wait 74003 00:12:29.909 08:03:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:29.909 08:03:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:29.909 08:03:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:29.909 08:03:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:29.909 08:03:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:29.909 08:03:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.909 08:03:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.909 08:03:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.909 08:03:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:29.909 ************************************ 00:12:29.909 END TEST nvmf_connect_disconnect 00:12:29.909 ************************************ 00:12:29.909 00:12:29.909 real 3m47.480s 00:12:29.909 user 14m47.555s 00:12:29.909 sys 0m21.043s 00:12:29.909 08:03:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:29.909 08:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:29.909 08:03:41 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.909 08:03:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:29.909 08:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.909 08:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:29.909 ************************************ 00:12:29.909 START TEST nvmf_multitarget 00:12:29.909 ************************************ 00:12:29.909 08:03:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:29.909 * Looking for test storage... 
00:12:29.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:29.909 08:03:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:29.909 08:03:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:29.909 08:03:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:30.169 08:03:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:30.169 08:03:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:30.169 08:03:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:30.169 08:03:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:30.169 08:03:41 -- scripts/common.sh@335 -- # IFS=.-: 00:12:30.169 08:03:41 -- scripts/common.sh@335 -- # read -ra ver1 00:12:30.169 08:03:41 -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.169 08:03:41 -- scripts/common.sh@336 -- # read -ra ver2 00:12:30.169 08:03:41 -- scripts/common.sh@337 -- # local 'op=<' 00:12:30.169 08:03:41 -- scripts/common.sh@339 -- # ver1_l=2 00:12:30.169 08:03:41 -- scripts/common.sh@340 -- # ver2_l=1 00:12:30.169 08:03:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:30.169 08:03:41 -- scripts/common.sh@343 -- # case "$op" in 00:12:30.169 08:03:41 -- scripts/common.sh@344 -- # : 1 00:12:30.169 08:03:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:30.169 08:03:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.169 08:03:41 -- scripts/common.sh@364 -- # decimal 1 00:12:30.169 08:03:41 -- scripts/common.sh@352 -- # local d=1 00:12:30.169 08:03:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.169 08:03:41 -- scripts/common.sh@354 -- # echo 1 00:12:30.169 08:03:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:30.169 08:03:41 -- scripts/common.sh@365 -- # decimal 2 00:12:30.169 08:03:41 -- scripts/common.sh@352 -- # local d=2 00:12:30.169 08:03:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.169 08:03:41 -- scripts/common.sh@354 -- # echo 2 00:12:30.169 08:03:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:30.169 08:03:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:30.169 08:03:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:30.169 08:03:41 -- scripts/common.sh@367 -- # return 0 00:12:30.169 08:03:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.169 08:03:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:30.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.169 --rc genhtml_branch_coverage=1 00:12:30.169 --rc genhtml_function_coverage=1 00:12:30.169 --rc genhtml_legend=1 00:12:30.169 --rc geninfo_all_blocks=1 00:12:30.169 --rc geninfo_unexecuted_blocks=1 00:12:30.169 00:12:30.169 ' 00:12:30.169 08:03:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:30.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.169 --rc genhtml_branch_coverage=1 00:12:30.169 --rc genhtml_function_coverage=1 00:12:30.169 --rc genhtml_legend=1 00:12:30.169 --rc geninfo_all_blocks=1 00:12:30.169 --rc geninfo_unexecuted_blocks=1 00:12:30.169 00:12:30.169 ' 00:12:30.169 08:03:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:30.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.169 --rc genhtml_branch_coverage=1 00:12:30.169 --rc genhtml_function_coverage=1 00:12:30.169 --rc genhtml_legend=1 00:12:30.169 --rc geninfo_all_blocks=1 00:12:30.169 --rc geninfo_unexecuted_blocks=1 00:12:30.169 00:12:30.169 ' 00:12:30.169 
08:03:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:30.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.169 --rc genhtml_branch_coverage=1 00:12:30.169 --rc genhtml_function_coverage=1 00:12:30.169 --rc genhtml_legend=1 00:12:30.169 --rc geninfo_all_blocks=1 00:12:30.169 --rc geninfo_unexecuted_blocks=1 00:12:30.169 00:12:30.169 ' 00:12:30.169 08:03:41 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:30.169 08:03:41 -- nvmf/common.sh@7 -- # uname -s 00:12:30.169 08:03:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.169 08:03:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.169 08:03:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.169 08:03:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.169 08:03:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.169 08:03:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.169 08:03:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.169 08:03:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.169 08:03:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.169 08:03:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.169 08:03:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:30.169 08:03:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:30.169 08:03:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.169 08:03:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.169 08:03:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:30.169 08:03:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.169 08:03:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.169 08:03:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.169 08:03:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.169 08:03:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.170 08:03:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.170 08:03:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.170 08:03:41 -- paths/export.sh@5 -- # export PATH 00:12:30.170 08:03:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.170 08:03:41 -- nvmf/common.sh@46 -- # : 0 00:12:30.170 08:03:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:30.170 08:03:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:30.170 08:03:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:30.170 08:03:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.170 08:03:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.170 08:03:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:30.170 08:03:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:30.170 08:03:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:30.170 08:03:41 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:30.170 08:03:41 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:30.170 08:03:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:30.170 08:03:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.170 08:03:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:30.170 08:03:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:30.170 08:03:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:30.170 08:03:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.170 08:03:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.170 08:03:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.170 08:03:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:30.170 08:03:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:30.170 08:03:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:30.170 08:03:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:30.170 08:03:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:30.170 08:03:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:30.170 08:03:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.170 08:03:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.170 08:03:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:30.170 08:03:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:30.170 08:03:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:30.170 08:03:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:30.170 08:03:41 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:30.170 08:03:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.170 08:03:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:30.170 08:03:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:30.170 08:03:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:30.170 08:03:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:30.170 08:03:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:30.170 08:03:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:30.170 Cannot find device "nvmf_tgt_br" 00:12:30.170 08:03:41 -- nvmf/common.sh@154 -- # true 00:12:30.170 08:03:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:30.170 Cannot find device "nvmf_tgt_br2" 00:12:30.170 08:03:41 -- nvmf/common.sh@155 -- # true 00:12:30.170 08:03:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:30.170 08:03:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:30.170 Cannot find device "nvmf_tgt_br" 00:12:30.170 08:03:41 -- nvmf/common.sh@157 -- # true 00:12:30.170 08:03:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:30.170 Cannot find device "nvmf_tgt_br2" 00:12:30.170 08:03:41 -- nvmf/common.sh@158 -- # true 00:12:30.170 08:03:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:30.170 08:03:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:30.170 08:03:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.170 08:03:41 -- nvmf/common.sh@161 -- # true 00:12:30.170 08:03:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.170 08:03:41 -- nvmf/common.sh@162 -- # true 00:12:30.170 08:03:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:30.170 08:03:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:30.170 08:03:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:30.170 08:03:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:30.170 08:03:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:30.429 08:03:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.429 08:03:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.429 08:03:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:30.429 08:03:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:30.429 08:03:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:30.429 08:03:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:30.429 08:03:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:30.429 08:03:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:30.429 08:03:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.429 08:03:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.429 08:03:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:30.429 08:03:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:30.429 08:03:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:30.429 08:03:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.429 08:03:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.429 08:03:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:30.429 08:03:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.429 08:03:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.429 08:03:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:30.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:12:30.429 00:12:30.429 --- 10.0.0.2 ping statistics --- 00:12:30.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.429 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:12:30.429 08:03:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:30.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:12:30.429 00:12:30.429 --- 10.0.0.3 ping statistics --- 00:12:30.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.429 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:12:30.429 08:03:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:30.429 00:12:30.429 --- 10.0.0.1 ping statistics --- 00:12:30.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.429 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:30.429 08:03:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.429 08:03:41 -- nvmf/common.sh@421 -- # return 0 00:12:30.429 08:03:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:30.429 08:03:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.429 08:03:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:30.429 08:03:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:30.429 08:03:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.429 08:03:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:30.429 08:03:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:30.429 08:03:41 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:30.429 08:03:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:30.429 08:03:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.429 08:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:30.429 08:03:41 -- nvmf/common.sh@469 -- # nvmfpid=77811 00:12:30.429 08:03:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.429 08:03:41 -- nvmf/common.sh@470 -- # waitforlisten 77811 00:12:30.429 08:03:41 -- common/autotest_common.sh@829 -- # '[' -z 77811 ']' 00:12:30.429 08:03:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.429 08:03:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
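Note: the nvmf_veth_init steps traced above amount to the following standalone sketch of the test topology; device names, addresses, and firewall rules are copied from the trace, while the preceding cleanup attempts and error handling are omitted (assumes root plus iproute2/iptables on the host, and that nvme-tcp is loaded afterwards as shown):

    ip netns add nvmf_tgt_ns_spdk                                  # target side lives in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side veth ends together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit the NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator
    modprobe nvme-tcp                                              # host-side transport for the initiator

With the topology verified by the three pings, nvmf_tgt is then launched inside nvmf_tgt_ns_spdk, which is the "Waiting for process to start up..." step in the trace that follows.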
00:12:30.429 08:03:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.429 08:03:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.429 08:03:41 -- common/autotest_common.sh@10 -- # set +x 00:12:30.429 [2024-12-07 08:03:41.670421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:30.429 [2024-12-07 08:03:41.670511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.688 [2024-12-07 08:03:41.810764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.688 [2024-12-07 08:03:41.873550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:30.688 [2024-12-07 08:03:41.873950] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.688 [2024-12-07 08:03:41.874001] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.688 [2024-12-07 08:03:41.874169] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.688 [2024-12-07 08:03:41.874347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.688 [2024-12-07 08:03:41.874533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.688 [2024-12-07 08:03:41.874765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.688 [2024-12-07 08:03:41.874772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.626 08:03:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.626 08:03:42 -- common/autotest_common.sh@862 -- # return 0 00:12:31.626 08:03:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:31.626 08:03:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.626 08:03:42 -- common/autotest_common.sh@10 -- # set +x 00:12:31.626 08:03:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.626 08:03:42 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:31.626 08:03:42 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:31.626 08:03:42 -- target/multitarget.sh@21 -- # jq length 00:12:31.626 08:03:42 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:31.885 08:03:42 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:31.885 "nvmf_tgt_1" 00:12:31.885 08:03:43 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:31.885 "nvmf_tgt_2" 00:12:32.144 08:03:43 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.144 08:03:43 -- target/multitarget.sh@28 -- # jq length 00:12:32.144 08:03:43 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:32.144 08:03:43 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:32.144 true 00:12:32.403 08:03:43 -- target/multitarget.sh@33 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:32.403 true 00:12:32.403 08:03:43 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.403 08:03:43 -- target/multitarget.sh@35 -- # jq length 00:12:32.403 08:03:43 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:32.403 08:03:43 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:32.403 08:03:43 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:32.403 08:03:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:32.403 08:03:43 -- nvmf/common.sh@116 -- # sync 00:12:32.662 08:03:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:32.662 08:03:43 -- nvmf/common.sh@119 -- # set +e 00:12:32.662 08:03:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:32.662 08:03:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:32.662 rmmod nvme_tcp 00:12:32.662 rmmod nvme_fabrics 00:12:32.662 rmmod nvme_keyring 00:12:32.662 08:03:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:32.662 08:03:43 -- nvmf/common.sh@123 -- # set -e 00:12:32.662 08:03:43 -- nvmf/common.sh@124 -- # return 0 00:12:32.662 08:03:43 -- nvmf/common.sh@477 -- # '[' -n 77811 ']' 00:12:32.662 08:03:43 -- nvmf/common.sh@478 -- # killprocess 77811 00:12:32.662 08:03:43 -- common/autotest_common.sh@936 -- # '[' -z 77811 ']' 00:12:32.662 08:03:43 -- common/autotest_common.sh@940 -- # kill -0 77811 00:12:32.662 08:03:43 -- common/autotest_common.sh@941 -- # uname 00:12:32.662 08:03:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:32.662 08:03:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77811 00:12:32.662 killing process with pid 77811 00:12:32.662 08:03:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:32.662 08:03:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:32.662 08:03:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77811' 00:12:32.662 08:03:43 -- common/autotest_common.sh@955 -- # kill 77811 00:12:32.662 08:03:43 -- common/autotest_common.sh@960 -- # wait 77811 00:12:32.922 08:03:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:32.922 08:03:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:32.922 08:03:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:32.922 08:03:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.922 08:03:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:32.922 08:03:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.922 08:03:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.922 08:03:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.922 08:03:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:32.922 ************************************ 00:12:32.922 END TEST nvmf_multitarget 00:12:32.922 ************************************ 00:12:32.922 00:12:32.922 real 0m2.957s 00:12:32.922 user 0m9.820s 00:12:32.922 sys 0m0.717s 00:12:32.922 08:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:32.922 08:03:44 -- common/autotest_common.sh@10 -- # set +x 00:12:32.922 08:03:44 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.922 08:03:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:32.922 08:03:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:32.922 
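Note: the multitarget flow that just completed above reduces to a few calls against multitarget_rpc.py; a condensed sketch follows, in which the count checks mirror the trace's jq length assertions and the -s 32 argument is copied verbatim from the trace without interpreting its semantics:

    rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target at startup
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # add two named targets
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + nvmf_tgt_1 + nvmf_tgt_2
    $rpc_py nvmf_delete_target -n nvmf_tgt_1              # remove them again
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target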
08:03:44 -- common/autotest_common.sh@10 -- # set +x 00:12:32.922 ************************************ 00:12:32.922 START TEST nvmf_rpc 00:12:32.922 ************************************ 00:12:32.922 08:03:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:32.922 * Looking for test storage... 00:12:32.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:32.922 08:03:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:32.922 08:03:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:32.922 08:03:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:33.182 08:03:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:33.182 08:03:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:33.182 08:03:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:33.182 08:03:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:33.182 08:03:44 -- scripts/common.sh@335 -- # IFS=.-: 00:12:33.182 08:03:44 -- scripts/common.sh@335 -- # read -ra ver1 00:12:33.182 08:03:44 -- scripts/common.sh@336 -- # IFS=.-: 00:12:33.182 08:03:44 -- scripts/common.sh@336 -- # read -ra ver2 00:12:33.182 08:03:44 -- scripts/common.sh@337 -- # local 'op=<' 00:12:33.182 08:03:44 -- scripts/common.sh@339 -- # ver1_l=2 00:12:33.182 08:03:44 -- scripts/common.sh@340 -- # ver2_l=1 00:12:33.182 08:03:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:33.182 08:03:44 -- scripts/common.sh@343 -- # case "$op" in 00:12:33.182 08:03:44 -- scripts/common.sh@344 -- # : 1 00:12:33.182 08:03:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:33.182 08:03:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:33.182 08:03:44 -- scripts/common.sh@364 -- # decimal 1 00:12:33.182 08:03:44 -- scripts/common.sh@352 -- # local d=1 00:12:33.182 08:03:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:33.182 08:03:44 -- scripts/common.sh@354 -- # echo 1 00:12:33.182 08:03:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:33.182 08:03:44 -- scripts/common.sh@365 -- # decimal 2 00:12:33.182 08:03:44 -- scripts/common.sh@352 -- # local d=2 00:12:33.182 08:03:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:33.182 08:03:44 -- scripts/common.sh@354 -- # echo 2 00:12:33.182 08:03:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:33.182 08:03:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:33.182 08:03:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:33.182 08:03:44 -- scripts/common.sh@367 -- # return 0 00:12:33.182 08:03:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:33.182 08:03:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:33.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.182 --rc genhtml_branch_coverage=1 00:12:33.182 --rc genhtml_function_coverage=1 00:12:33.182 --rc genhtml_legend=1 00:12:33.182 --rc geninfo_all_blocks=1 00:12:33.182 --rc geninfo_unexecuted_blocks=1 00:12:33.182 00:12:33.182 ' 00:12:33.182 08:03:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:33.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.182 --rc genhtml_branch_coverage=1 00:12:33.182 --rc genhtml_function_coverage=1 00:12:33.182 --rc genhtml_legend=1 00:12:33.182 --rc geninfo_all_blocks=1 00:12:33.182 --rc geninfo_unexecuted_blocks=1 00:12:33.182 00:12:33.182 ' 00:12:33.182 08:03:44 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:33.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.182 --rc genhtml_branch_coverage=1 00:12:33.182 --rc genhtml_function_coverage=1 00:12:33.182 --rc genhtml_legend=1 00:12:33.182 --rc geninfo_all_blocks=1 00:12:33.182 --rc geninfo_unexecuted_blocks=1 00:12:33.182 00:12:33.182 ' 00:12:33.182 08:03:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:33.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:33.182 --rc genhtml_branch_coverage=1 00:12:33.182 --rc genhtml_function_coverage=1 00:12:33.182 --rc genhtml_legend=1 00:12:33.182 --rc geninfo_all_blocks=1 00:12:33.182 --rc geninfo_unexecuted_blocks=1 00:12:33.182 00:12:33.182 ' 00:12:33.182 08:03:44 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:33.182 08:03:44 -- nvmf/common.sh@7 -- # uname -s 00:12:33.182 08:03:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.182 08:03:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.182 08:03:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.182 08:03:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.182 08:03:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.182 08:03:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.182 08:03:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.182 08:03:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.182 08:03:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.182 08:03:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.182 08:03:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:33.182 08:03:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:33.182 08:03:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.182 08:03:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.182 08:03:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:33.183 08:03:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:33.183 08:03:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.183 08:03:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.183 08:03:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.183 08:03:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.183 08:03:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.183 08:03:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.183 08:03:44 -- paths/export.sh@5 -- # export PATH 00:12:33.183 08:03:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.183 08:03:44 -- nvmf/common.sh@46 -- # : 0 00:12:33.183 08:03:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:33.183 08:03:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:33.183 08:03:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:33.183 08:03:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.183 08:03:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.183 08:03:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:33.183 08:03:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:33.183 08:03:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:33.183 08:03:44 -- target/rpc.sh@11 -- # loops=5 00:12:33.183 08:03:44 -- target/rpc.sh@23 -- # nvmftestinit 00:12:33.183 08:03:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:33.183 08:03:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.183 08:03:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:33.183 08:03:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:33.183 08:03:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:33.183 08:03:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.183 08:03:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.183 08:03:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.183 08:03:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:33.183 08:03:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:33.183 08:03:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:33.183 08:03:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:33.183 08:03:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:33.183 08:03:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:33.183 08:03:44 -- nvmf/common.sh@140 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:12:33.183 08:03:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.183 08:03:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:33.183 08:03:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:33.183 08:03:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:33.183 08:03:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:33.183 08:03:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:33.183 08:03:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.183 08:03:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:33.183 08:03:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:33.183 08:03:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:33.183 08:03:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:33.183 08:03:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:33.183 08:03:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:33.183 Cannot find device "nvmf_tgt_br" 00:12:33.183 08:03:44 -- nvmf/common.sh@154 -- # true 00:12:33.183 08:03:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:33.183 Cannot find device "nvmf_tgt_br2" 00:12:33.183 08:03:44 -- nvmf/common.sh@155 -- # true 00:12:33.183 08:03:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:33.183 08:03:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:33.183 Cannot find device "nvmf_tgt_br" 00:12:33.183 08:03:44 -- nvmf/common.sh@157 -- # true 00:12:33.183 08:03:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:33.183 Cannot find device "nvmf_tgt_br2" 00:12:33.183 08:03:44 -- nvmf/common.sh@158 -- # true 00:12:33.183 08:03:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:33.183 08:03:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:33.183 08:03:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:33.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.183 08:03:44 -- nvmf/common.sh@161 -- # true 00:12:33.183 08:03:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:33.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:33.183 08:03:44 -- nvmf/common.sh@162 -- # true 00:12:33.183 08:03:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:33.183 08:03:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:33.183 08:03:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:33.183 08:03:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:33.183 08:03:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:33.183 08:03:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:33.442 08:03:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:33.442 08:03:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:33.442 08:03:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:33.442 08:03:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:33.442 08:03:44 -- nvmf/common.sh@183 -- # ip 
link set nvmf_init_br up 00:12:33.442 08:03:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:33.442 08:03:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:33.442 08:03:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:33.442 08:03:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:33.442 08:03:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:33.442 08:03:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:33.442 08:03:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:33.442 08:03:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:33.442 08:03:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:33.442 08:03:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:33.442 08:03:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:33.442 08:03:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:33.442 08:03:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:33.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:12:33.442 00:12:33.442 --- 10.0.0.2 ping statistics --- 00:12:33.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.442 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:33.442 08:03:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:33.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:33.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:33.442 00:12:33.442 --- 10.0.0.3 ping statistics --- 00:12:33.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.442 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:33.442 08:03:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:33.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:33.442 00:12:33.442 --- 10.0.0.1 ping statistics --- 00:12:33.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.442 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:33.442 08:03:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.442 08:03:44 -- nvmf/common.sh@421 -- # return 0 00:12:33.442 08:03:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:33.442 08:03:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.442 08:03:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:33.442 08:03:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:33.442 08:03:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.442 08:03:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:33.442 08:03:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:33.442 08:03:44 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:33.442 08:03:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:33.442 08:03:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.442 08:03:44 -- common/autotest_common.sh@10 -- # set +x 00:12:33.442 08:03:44 -- nvmf/common.sh@469 -- # nvmfpid=78050 00:12:33.442 08:03:44 -- nvmf/common.sh@470 -- # waitforlisten 78050 00:12:33.442 08:03:44 -- common/autotest_common.sh@829 -- # '[' -z 78050 ']' 00:12:33.442 08:03:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.442 08:03:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.442 08:03:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.442 08:03:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.442 08:03:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.442 08:03:44 -- common/autotest_common.sh@10 -- # set +x 00:12:33.442 [2024-12-07 08:03:44.671075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:33.442 [2024-12-07 08:03:44.671167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.701 [2024-12-07 08:03:44.815376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.701 [2024-12-07 08:03:44.890731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:33.701 [2024-12-07 08:03:44.891170] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.701 [2024-12-07 08:03:44.891218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.701 [2024-12-07 08:03:44.891235] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:33.701 [2024-12-07 08:03:44.891317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.701 [2024-12-07 08:03:44.891380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.701 [2024-12-07 08:03:44.892345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.701 [2024-12-07 08:03:44.892361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.635 08:03:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.635 08:03:45 -- common/autotest_common.sh@862 -- # return 0 00:12:34.635 08:03:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:34.635 08:03:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:34.635 08:03:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.635 08:03:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.635 08:03:45 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:34.635 08:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.635 08:03:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.635 08:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.635 08:03:45 -- target/rpc.sh@26 -- # stats='{ 00:12:34.635 "poll_groups": [ 00:12:34.635 { 00:12:34.635 "admin_qpairs": 0, 00:12:34.635 "completed_nvme_io": 0, 00:12:34.635 "current_admin_qpairs": 0, 00:12:34.635 "current_io_qpairs": 0, 00:12:34.635 "io_qpairs": 0, 00:12:34.635 "name": "nvmf_tgt_poll_group_0", 00:12:34.635 "pending_bdev_io": 0, 00:12:34.635 "transports": [] 00:12:34.635 }, 00:12:34.635 { 00:12:34.635 "admin_qpairs": 0, 00:12:34.635 "completed_nvme_io": 0, 00:12:34.635 "current_admin_qpairs": 0, 00:12:34.635 "current_io_qpairs": 0, 00:12:34.635 "io_qpairs": 0, 00:12:34.635 "name": "nvmf_tgt_poll_group_1", 00:12:34.635 "pending_bdev_io": 0, 00:12:34.635 "transports": [] 00:12:34.635 }, 00:12:34.635 { 00:12:34.635 "admin_qpairs": 0, 00:12:34.635 "completed_nvme_io": 0, 00:12:34.635 "current_admin_qpairs": 0, 00:12:34.635 "current_io_qpairs": 0, 00:12:34.635 "io_qpairs": 0, 00:12:34.635 "name": "nvmf_tgt_poll_group_2", 00:12:34.635 "pending_bdev_io": 0, 00:12:34.635 "transports": [] 00:12:34.635 }, 00:12:34.635 { 00:12:34.635 "admin_qpairs": 0, 00:12:34.635 "completed_nvme_io": 0, 00:12:34.635 "current_admin_qpairs": 0, 00:12:34.635 "current_io_qpairs": 0, 00:12:34.635 "io_qpairs": 0, 00:12:34.635 "name": "nvmf_tgt_poll_group_3", 00:12:34.635 "pending_bdev_io": 0, 00:12:34.635 "transports": [] 00:12:34.635 } 00:12:34.635 ], 00:12:34.635 "tick_rate": 2200000000 00:12:34.635 }' 00:12:34.635 08:03:45 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:34.635 08:03:45 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:34.635 08:03:45 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:34.635 08:03:45 -- target/rpc.sh@15 -- # wc -l 00:12:34.635 08:03:45 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:34.635 08:03:45 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:34.635 08:03:45 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:34.635 08:03:45 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.635 08:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.635 08:03:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.635 [2024-12-07 08:03:45.839447] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.635 08:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.635 08:03:45 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:34.635 08:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.635 08:03:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.635 08:03:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.635 08:03:45 -- target/rpc.sh@33 -- # stats='{ 00:12:34.635 "poll_groups": [ 00:12:34.635 { 00:12:34.635 "admin_qpairs": 0, 00:12:34.635 "completed_nvme_io": 0, 00:12:34.635 "current_admin_qpairs": 0, 00:12:34.635 "current_io_qpairs": 0, 00:12:34.635 "io_qpairs": 0, 00:12:34.636 "name": "nvmf_tgt_poll_group_0", 00:12:34.636 "pending_bdev_io": 0, 00:12:34.636 "transports": [ 00:12:34.636 { 00:12:34.636 "trtype": "TCP" 00:12:34.636 } 00:12:34.636 ] 00:12:34.636 }, 00:12:34.636 { 00:12:34.636 "admin_qpairs": 0, 00:12:34.636 "completed_nvme_io": 0, 00:12:34.636 "current_admin_qpairs": 0, 00:12:34.636 "current_io_qpairs": 0, 00:12:34.636 "io_qpairs": 0, 00:12:34.636 "name": "nvmf_tgt_poll_group_1", 00:12:34.636 "pending_bdev_io": 0, 00:12:34.636 "transports": [ 00:12:34.636 { 00:12:34.636 "trtype": "TCP" 00:12:34.636 } 00:12:34.636 ] 00:12:34.636 }, 00:12:34.636 { 00:12:34.636 "admin_qpairs": 0, 00:12:34.636 "completed_nvme_io": 0, 00:12:34.636 "current_admin_qpairs": 0, 00:12:34.636 "current_io_qpairs": 0, 00:12:34.636 "io_qpairs": 0, 00:12:34.636 "name": "nvmf_tgt_poll_group_2", 00:12:34.636 "pending_bdev_io": 0, 00:12:34.636 "transports": [ 00:12:34.636 { 00:12:34.636 "trtype": "TCP" 00:12:34.636 } 00:12:34.636 ] 00:12:34.636 }, 00:12:34.636 { 00:12:34.636 "admin_qpairs": 0, 00:12:34.636 "completed_nvme_io": 0, 00:12:34.636 "current_admin_qpairs": 0, 00:12:34.636 "current_io_qpairs": 0, 00:12:34.636 "io_qpairs": 0, 00:12:34.636 "name": "nvmf_tgt_poll_group_3", 00:12:34.636 "pending_bdev_io": 0, 00:12:34.636 "transports": [ 00:12:34.636 { 00:12:34.636 "trtype": "TCP" 00:12:34.636 } 00:12:34.636 ] 00:12:34.636 } 00:12:34.636 ], 00:12:34.636 "tick_rate": 2200000000 00:12:34.636 }' 00:12:34.636 08:03:45 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:34.636 08:03:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:34.636 08:03:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:34.636 08:03:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.894 08:03:45 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:34.894 08:03:45 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:34.894 08:03:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:34.894 08:03:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:34.894 08:03:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:34.894 08:03:45 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:34.894 08:03:45 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:34.894 08:03:45 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:34.894 08:03:45 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:34.894 08:03:45 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:34.894 08:03:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.894 08:03:45 -- common/autotest_common.sh@10 -- # set +x 00:12:34.894 Malloc1 00:12:34.894 08:03:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.894 08:03:46 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.894 08:03:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.894 08:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.894 
08:03:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.894 08:03:46 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.894 08:03:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.894 08:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.894 08:03:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.894 08:03:46 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:34.895 08:03:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.895 08:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.895 08:03:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.895 08:03:46 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.895 08:03:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.895 08:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.895 [2024-12-07 08:03:46.058426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.895 08:03:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.895 08:03:46 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec -a 10.0.0.2 -s 4420 00:12:34.895 08:03:46 -- common/autotest_common.sh@650 -- # local es=0 00:12:34.895 08:03:46 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec -a 10.0.0.2 -s 4420 00:12:34.895 08:03:46 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:34.895 08:03:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.895 08:03:46 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:34.895 08:03:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.895 08:03:46 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:34.895 08:03:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.895 08:03:46 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:34.895 08:03:46 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:34.895 08:03:46 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec -a 10.0.0.2 -s 4420 00:12:34.895 [2024-12-07 08:03:46.086762] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec' 00:12:34.895 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:34.895 could not add new controller: failed to write to nvme-fabrics device 00:12:34.895 08:03:46 -- common/autotest_common.sh@653 -- # es=1 00:12:34.895 08:03:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.895 08:03:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.895 08:03:46 -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:12:34.895 08:03:46 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:34.895 08:03:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.895 08:03:46 -- common/autotest_common.sh@10 -- # set +x 00:12:34.895 08:03:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.895 08:03:46 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.153 08:03:46 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.153 08:03:46 -- common/autotest_common.sh@1187 -- # local i=0 00:12:35.153 08:03:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.153 08:03:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:35.153 08:03:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:37.054 08:03:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:37.054 08:03:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:37.054 08:03:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.054 08:03:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:37.054 08:03:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.054 08:03:48 -- common/autotest_common.sh@1197 -- # return 0 00:12:37.054 08:03:48 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.313 08:03:48 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.313 08:03:48 -- common/autotest_common.sh@1208 -- # local i=0 00:12:37.313 08:03:48 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:37.313 08:03:48 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.313 08:03:48 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:37.313 08:03:48 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.313 08:03:48 -- common/autotest_common.sh@1220 -- # return 0 00:12:37.313 08:03:48 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:37.313 08:03:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.313 08:03:48 -- common/autotest_common.sh@10 -- # set +x 00:12:37.313 08:03:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.313 08:03:48 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.313 08:03:48 -- common/autotest_common.sh@650 -- # local es=0 00:12:37.313 08:03:48 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.313 08:03:48 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:37.313 08:03:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.313 08:03:48 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:37.313 08:03:48 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.313 08:03:48 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:37.313 08:03:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:37.313 08:03:48 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:37.313 08:03:48 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.313 08:03:48 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.313 [2024-12-07 08:03:48.478642] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec' 00:12:37.313 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.313 could not add new controller: failed to write to nvme-fabrics device 00:12:37.313 08:03:48 -- common/autotest_common.sh@653 -- # es=1 00:12:37.313 08:03:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:37.313 08:03:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:37.313 08:03:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:37.313 08:03:48 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:37.313 08:03:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.313 08:03:48 -- common/autotest_common.sh@10 -- # set +x 00:12:37.313 08:03:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.313 08:03:48 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.572 08:03:48 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.572 08:03:48 -- common/autotest_common.sh@1187 -- # local i=0 00:12:37.572 08:03:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.572 08:03:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:37.572 08:03:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:39.503 08:03:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:39.503 08:03:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:39.503 08:03:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.503 08:03:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:39.503 08:03:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.503 08:03:50 -- common/autotest_common.sh@1197 -- # return 0 00:12:39.503 08:03:50 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.761 08:03:50 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:39.761 08:03:50 -- common/autotest_common.sh@1208 -- # local i=0 00:12:39.761 08:03:50 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:39.761 08:03:50 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.761 08:03:50 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:39.761 08:03:50 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:39.761 08:03:50 -- common/autotest_common.sh@1220 -- # return 0 00:12:39.761 08:03:50 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:39.761 08:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.761 08:03:50 -- common/autotest_common.sh@10 -- # set +x 00:12:39.761 08:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.761 08:03:50 -- target/rpc.sh@81 -- # seq 1 5 00:12:39.761 08:03:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:39.761 08:03:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:39.761 08:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.761 08:03:50 -- common/autotest_common.sh@10 -- # set +x 00:12:39.761 08:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.761 08:03:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.761 08:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.761 08:03:50 -- common/autotest_common.sh@10 -- # set +x 00:12:39.761 [2024-12-07 08:03:50.897875] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.761 08:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.761 08:03:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:39.761 08:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.761 08:03:50 -- common/autotest_common.sh@10 -- # set +x 00:12:39.761 08:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.761 08:03:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:39.761 08:03:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.761 08:03:50 -- common/autotest_common.sh@10 -- # set +x 00:12:39.761 08:03:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.761 08:03:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.020 08:03:51 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.020 08:03:51 -- common/autotest_common.sh@1187 -- # local i=0 00:12:40.020 08:03:51 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.020 08:03:51 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:40.020 08:03:51 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:41.920 08:03:53 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:41.920 08:03:53 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:41.920 08:03:53 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.920 08:03:53 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:41.920 08:03:53 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.920 08:03:53 -- common/autotest_common.sh@1197 -- # return 0 00:12:41.920 08:03:53 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.920 08:03:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.920 08:03:53 -- common/autotest_common.sh@1208 -- # local i=0 00:12:41.920 08:03:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:41.920 08:03:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:41.920 08:03:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:41.920 08:03:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.177 08:03:53 -- common/autotest_common.sh@1220 -- # return 0 00:12:42.177 08:03:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.177 08:03:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.177 08:03:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.177 08:03:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.177 08:03:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.177 08:03:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.177 08:03:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.177 08:03:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.177 08:03:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:42.177 08:03:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.177 08:03:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.177 08:03:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.177 08:03:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.177 08:03:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.177 08:03:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.177 08:03:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.177 [2024-12-07 08:03:53.231521] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.177 08:03:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.177 08:03:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:42.177 08:03:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.177 08:03:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.177 08:03:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.177 08:03:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.177 08:03:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.177 08:03:53 -- common/autotest_common.sh@10 -- # set +x 00:12:42.177 08:03:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.177 08:03:53 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.177 08:03:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.177 08:03:53 -- common/autotest_common.sh@1187 -- # local i=0 00:12:42.177 08:03:53 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.177 08:03:53 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:42.177 08:03:53 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:44.706 08:03:55 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:44.706 08:03:55 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:44.706 08:03:55 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.706 08:03:55 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:44.706 08:03:55 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.706 08:03:55 -- 
common/autotest_common.sh@1197 -- # return 0 00:12:44.706 08:03:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.706 08:03:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.706 08:03:55 -- common/autotest_common.sh@1208 -- # local i=0 00:12:44.706 08:03:55 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:44.706 08:03:55 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.706 08:03:55 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:44.706 08:03:55 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.706 08:03:55 -- common/autotest_common.sh@1220 -- # return 0 00:12:44.706 08:03:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.706 08:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.706 08:03:55 -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 08:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.706 08:03:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.706 08:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.706 08:03:55 -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 08:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.706 08:03:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.706 08:03:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.706 08:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.706 08:03:55 -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 08:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.706 08:03:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.706 08:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.706 08:03:55 -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 [2024-12-07 08:03:55.545728] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.706 08:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.706 08:03:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.706 08:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.706 08:03:55 -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 08:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.706 08:03:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.706 08:03:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.706 08:03:55 -- common/autotest_common.sh@10 -- # set +x 00:12:44.706 08:03:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.706 08:03:55 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.706 08:03:55 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.706 08:03:55 -- common/autotest_common.sh@1187 -- # local i=0 00:12:44.706 08:03:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.706 08:03:55 -- common/autotest_common.sh@1189 -- 
# [[ -n '' ]] 00:12:44.706 08:03:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:46.605 08:03:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:46.605 08:03:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:46.605 08:03:57 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.605 08:03:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:46.605 08:03:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.605 08:03:57 -- common/autotest_common.sh@1197 -- # return 0 00:12:46.605 08:03:57 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.605 08:03:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.605 08:03:57 -- common/autotest_common.sh@1208 -- # local i=0 00:12:46.605 08:03:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:46.605 08:03:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.605 08:03:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:46.605 08:03:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.605 08:03:57 -- common/autotest_common.sh@1220 -- # return 0 00:12:46.605 08:03:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.605 08:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.605 08:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:46.605 08:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.605 08:03:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.605 08:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.605 08:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:46.605 08:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.605 08:03:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.605 08:03:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.605 08:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.605 08:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:46.605 08:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.605 08:03:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.605 08:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.605 08:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:46.605 [2024-12-07 08:03:57.867447] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.605 08:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.605 08:03:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.605 08:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.605 08:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:46.863 08:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.863 08:03:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.863 08:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.863 08:03:57 -- common/autotest_common.sh@10 -- # set +x 00:12:46.863 08:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.863 
08:03:57 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.863 08:03:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.863 08:03:58 -- common/autotest_common.sh@1187 -- # local i=0 00:12:46.863 08:03:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.863 08:03:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:46.863 08:03:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:49.392 08:04:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:49.392 08:04:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:49.392 08:04:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.392 08:04:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:49.392 08:04:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.392 08:04:00 -- common/autotest_common.sh@1197 -- # return 0 00:12:49.392 08:04:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.392 08:04:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.392 08:04:00 -- common/autotest_common.sh@1208 -- # local i=0 00:12:49.392 08:04:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:49.392 08:04:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.392 08:04:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:49.392 08:04:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.392 08:04:00 -- common/autotest_common.sh@1220 -- # return 0 00:12:49.392 08:04:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.392 08:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.392 08:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:49.392 08:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.392 08:04:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.392 08:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.392 08:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:49.392 08:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.392 08:04:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.392 08:04:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.392 08:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.392 08:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:49.392 08:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.392 08:04:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.392 08:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.392 08:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:49.393 [2024-12-07 08:04:00.184701] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.393 08:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.393 08:04:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.393 
08:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.393 08:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:49.393 08:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.393 08:04:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.393 08:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.393 08:04:00 -- common/autotest_common.sh@10 -- # set +x 00:12:49.393 08:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.393 08:04:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.393 08:04:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.393 08:04:00 -- common/autotest_common.sh@1187 -- # local i=0 00:12:49.393 08:04:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.393 08:04:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:49.393 08:04:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:51.295 08:04:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:51.295 08:04:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:51.295 08:04:02 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.295 08:04:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:51.295 08:04:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.295 08:04:02 -- common/autotest_common.sh@1197 -- # return 0 00:12:51.295 08:04:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.295 08:04:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.295 08:04:02 -- common/autotest_common.sh@1208 -- # local i=0 00:12:51.295 08:04:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:51.295 08:04:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.295 08:04:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:51.295 08:04:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.295 08:04:02 -- common/autotest_common.sh@1220 -- # return 0 00:12:51.295 08:04:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.295 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.295 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.295 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.295 08:04:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.295 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.295 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.295 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 -- target/rpc.sh@99 -- # seq 1 5 00:12:51.296 08:04:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.296 08:04:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 [2024-12-07 08:04:02.517756] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.296 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.296 08:04:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.296 08:04:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.296 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.296 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.296 [2024-12-07 08:04:02.565865] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.555 08:04:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 [2024-12-07 08:04:02.617921] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.555 08:04:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 08:04:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.555 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.555 [2024-12-07 08:04:02.665982] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.555 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.555 
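The second batch of iterations (target/rpc.sh@99-107) never connects a host; it only drives the RPC surface. A short sketch of that variant, assuming the same running target; note that when -n is omitted the target assigns the next free NSID (hence namespace 1 is removed here, whereas the earlier loop forced NSID 5):

  # RPC-only iteration (sketch); NSID is auto-assigned because -n is not given
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The remaining iterations below repeat this sequence.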
08:04:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.555 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.556 08:04:02 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 [2024-12-07 08:04:02.714057] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:51.556 08:04:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.556 08:04:02 -- common/autotest_common.sh@10 -- # set +x 00:12:51.556 08:04:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.556 08:04:02 -- target/rpc.sh@110 -- # stats='{ 00:12:51.556 "poll_groups": [ 00:12:51.556 { 00:12:51.556 "admin_qpairs": 2, 00:12:51.556 "completed_nvme_io": 115, 00:12:51.556 "current_admin_qpairs": 0, 00:12:51.556 "current_io_qpairs": 0, 00:12:51.556 "io_qpairs": 16, 00:12:51.556 "name": "nvmf_tgt_poll_group_0", 00:12:51.556 "pending_bdev_io": 0, 00:12:51.556 "transports": [ 00:12:51.556 { 00:12:51.556 "trtype": "TCP" 00:12:51.556 } 00:12:51.556 ] 00:12:51.556 }, 00:12:51.556 { 00:12:51.556 "admin_qpairs": 3, 00:12:51.556 "completed_nvme_io": 164, 00:12:51.556 "current_admin_qpairs": 0, 00:12:51.556 "current_io_qpairs": 0, 00:12:51.556 "io_qpairs": 17, 00:12:51.556 "name": "nvmf_tgt_poll_group_1", 00:12:51.556 "pending_bdev_io": 0, 00:12:51.556 "transports": [ 00:12:51.556 { 00:12:51.556 "trtype": "TCP" 00:12:51.556 } 00:12:51.556 ] 00:12:51.556 }, 00:12:51.556 { 00:12:51.556 "admin_qpairs": 1, 00:12:51.556 "completed_nvme_io": 70, 00:12:51.556 "current_admin_qpairs": 0, 00:12:51.556 "current_io_qpairs": 0, 00:12:51.556 "io_qpairs": 19, 00:12:51.556 "name": "nvmf_tgt_poll_group_2", 00:12:51.556 "pending_bdev_io": 0, 00:12:51.556 "transports": [ 00:12:51.556 { 00:12:51.556 "trtype": "TCP" 00:12:51.556 } 00:12:51.556 ] 00:12:51.556 }, 00:12:51.556 { 00:12:51.556 "admin_qpairs": 1, 00:12:51.556 "completed_nvme_io": 71, 00:12:51.556 "current_admin_qpairs": 0, 00:12:51.556 "current_io_qpairs": 0, 00:12:51.556 "io_qpairs": 18, 00:12:51.556 "name": "nvmf_tgt_poll_group_3", 00:12:51.556 "pending_bdev_io": 0, 00:12:51.556 "transports": [ 00:12:51.556 { 00:12:51.556 "trtype": "TCP" 00:12:51.556 } 00:12:51.556 ] 00:12:51.556 } 00:12:51.556 ], 00:12:51.556 "tick_rate": 2200000000 00:12:51.556 }' 00:12:51.556 08:04:02 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:51.556 08:04:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:51.556 08:04:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.556 08:04:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:51.556 08:04:02 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:51.815 08:04:02 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:51.815 08:04:02 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:51.815 08:04:02 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:51.815 08:04:02 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.815 08:04:02 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:51.815 08:04:02 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:51.815 08:04:02 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:51.815 08:04:02 -- target/rpc.sh@123 -- # nvmftestfini 00:12:51.815 08:04:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.815 08:04:02 -- nvmf/common.sh@116 -- # sync 00:12:51.815 08:04:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.815 08:04:02 -- nvmf/common.sh@119 -- # set +e 00:12:51.815 08:04:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.815 08:04:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.815 rmmod nvme_tcp 00:12:51.815 rmmod nvme_fabrics 00:12:51.815 rmmod nvme_keyring 00:12:51.815 08:04:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.815 08:04:02 -- nvmf/common.sh@123 -- # set -e 00:12:51.815 08:04:02 -- nvmf/common.sh@124 
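The (( 7 > 0 )) and (( 70 > 0 )) checks above come from the jsum helper, which sums one field across all poll groups in the nvmf_get_stats output. A sketch of the same idea (assumes rpc.py is reachable the same way as in the test; the real helper operates on the captured $stats string rather than re-querying):

  # sum a per-poll-group counter from nvmf_get_stats, as jsum does above
  jsum() {
      local filter=$1
      rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }
  jsum '.poll_groups[].admin_qpairs'   # 2+3+1+1 = 7 in the run above
  jsum '.poll_groups[].io_qpairs'      # 16+17+19+18 = 70 in the run above

The cleanup and module unload continue below.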
-- # return 0 00:12:51.815 08:04:02 -- nvmf/common.sh@477 -- # '[' -n 78050 ']' 00:12:51.815 08:04:02 -- nvmf/common.sh@478 -- # killprocess 78050 00:12:51.815 08:04:02 -- common/autotest_common.sh@936 -- # '[' -z 78050 ']' 00:12:51.815 08:04:02 -- common/autotest_common.sh@940 -- # kill -0 78050 00:12:51.815 08:04:02 -- common/autotest_common.sh@941 -- # uname 00:12:51.815 08:04:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:51.815 08:04:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78050 00:12:51.815 killing process with pid 78050 00:12:51.815 08:04:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:51.815 08:04:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:51.815 08:04:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78050' 00:12:51.815 08:04:03 -- common/autotest_common.sh@955 -- # kill 78050 00:12:51.815 08:04:03 -- common/autotest_common.sh@960 -- # wait 78050 00:12:52.074 08:04:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:52.074 08:04:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:52.074 08:04:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:52.074 08:04:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.074 08:04:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:52.074 08:04:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.074 08:04:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.074 08:04:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.074 08:04:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:52.074 00:12:52.074 real 0m19.203s 00:12:52.074 user 1m12.783s 00:12:52.074 sys 0m2.077s 00:12:52.074 08:04:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:52.074 08:04:03 -- common/autotest_common.sh@10 -- # set +x 00:12:52.074 ************************************ 00:12:52.074 END TEST nvmf_rpc 00:12:52.074 ************************************ 00:12:52.074 08:04:03 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:52.074 08:04:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:52.074 08:04:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:52.074 08:04:03 -- common/autotest_common.sh@10 -- # set +x 00:12:52.074 ************************************ 00:12:52.074 START TEST nvmf_invalid 00:12:52.074 ************************************ 00:12:52.074 08:04:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:52.334 * Looking for test storage... 
00:12:52.334 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:52.334 08:04:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:52.334 08:04:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:52.334 08:04:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:52.334 08:04:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:52.334 08:04:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:52.334 08:04:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:52.334 08:04:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:52.334 08:04:03 -- scripts/common.sh@335 -- # IFS=.-: 00:12:52.334 08:04:03 -- scripts/common.sh@335 -- # read -ra ver1 00:12:52.334 08:04:03 -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.334 08:04:03 -- scripts/common.sh@336 -- # read -ra ver2 00:12:52.334 08:04:03 -- scripts/common.sh@337 -- # local 'op=<' 00:12:52.334 08:04:03 -- scripts/common.sh@339 -- # ver1_l=2 00:12:52.334 08:04:03 -- scripts/common.sh@340 -- # ver2_l=1 00:12:52.334 08:04:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:52.334 08:04:03 -- scripts/common.sh@343 -- # case "$op" in 00:12:52.334 08:04:03 -- scripts/common.sh@344 -- # : 1 00:12:52.334 08:04:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:52.334 08:04:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.334 08:04:03 -- scripts/common.sh@364 -- # decimal 1 00:12:52.334 08:04:03 -- scripts/common.sh@352 -- # local d=1 00:12:52.334 08:04:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.334 08:04:03 -- scripts/common.sh@354 -- # echo 1 00:12:52.334 08:04:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:52.334 08:04:03 -- scripts/common.sh@365 -- # decimal 2 00:12:52.334 08:04:03 -- scripts/common.sh@352 -- # local d=2 00:12:52.334 08:04:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.334 08:04:03 -- scripts/common.sh@354 -- # echo 2 00:12:52.334 08:04:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:52.334 08:04:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:52.334 08:04:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:52.334 08:04:03 -- scripts/common.sh@367 -- # return 0 00:12:52.334 08:04:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.334 08:04:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:52.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.334 --rc genhtml_branch_coverage=1 00:12:52.334 --rc genhtml_function_coverage=1 00:12:52.334 --rc genhtml_legend=1 00:12:52.334 --rc geninfo_all_blocks=1 00:12:52.334 --rc geninfo_unexecuted_blocks=1 00:12:52.334 00:12:52.334 ' 00:12:52.334 08:04:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:52.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.334 --rc genhtml_branch_coverage=1 00:12:52.334 --rc genhtml_function_coverage=1 00:12:52.334 --rc genhtml_legend=1 00:12:52.334 --rc geninfo_all_blocks=1 00:12:52.334 --rc geninfo_unexecuted_blocks=1 00:12:52.334 00:12:52.334 ' 00:12:52.334 08:04:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:52.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.334 --rc genhtml_branch_coverage=1 00:12:52.334 --rc genhtml_function_coverage=1 00:12:52.334 --rc genhtml_legend=1 00:12:52.334 --rc geninfo_all_blocks=1 00:12:52.334 --rc geninfo_unexecuted_blocks=1 00:12:52.334 00:12:52.334 ' 00:12:52.334 
08:04:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:52.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.334 --rc genhtml_branch_coverage=1 00:12:52.334 --rc genhtml_function_coverage=1 00:12:52.334 --rc genhtml_legend=1 00:12:52.334 --rc geninfo_all_blocks=1 00:12:52.334 --rc geninfo_unexecuted_blocks=1 00:12:52.334 00:12:52.334 ' 00:12:52.334 08:04:03 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:52.334 08:04:03 -- nvmf/common.sh@7 -- # uname -s 00:12:52.334 08:04:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.335 08:04:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.335 08:04:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.335 08:04:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.335 08:04:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.335 08:04:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.335 08:04:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.335 08:04:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.335 08:04:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.335 08:04:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.335 08:04:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:52.335 08:04:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:52.335 08:04:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.335 08:04:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.335 08:04:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:52.335 08:04:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.335 08:04:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.335 08:04:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.335 08:04:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.335 08:04:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.335 08:04:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.335 08:04:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.335 08:04:03 -- paths/export.sh@5 -- # export PATH 00:12:52.335 08:04:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.335 08:04:03 -- nvmf/common.sh@46 -- # : 0 00:12:52.335 08:04:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:52.335 08:04:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:52.335 08:04:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:52.335 08:04:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.335 08:04:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.335 08:04:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:52.335 08:04:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:52.335 08:04:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:52.335 08:04:03 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:52.335 08:04:03 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:52.335 08:04:03 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:52.335 08:04:03 -- target/invalid.sh@14 -- # target=foobar 00:12:52.335 08:04:03 -- target/invalid.sh@16 -- # RANDOM=0 00:12:52.335 08:04:03 -- target/invalid.sh@34 -- # nvmftestinit 00:12:52.335 08:04:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:52.335 08:04:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.335 08:04:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:52.335 08:04:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:52.335 08:04:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:52.335 08:04:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.335 08:04:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.335 08:04:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.335 08:04:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:52.335 08:04:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:52.335 08:04:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:52.335 08:04:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:52.335 08:04:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:52.335 08:04:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:52.335 08:04:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.335 08:04:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.335 08:04:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:52.335 08:04:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:52.335 08:04:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:52.335 08:04:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:52.335 08:04:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:52.335 08:04:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.335 08:04:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:52.335 08:04:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:52.335 08:04:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:52.335 08:04:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:52.335 08:04:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:52.335 08:04:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:52.335 Cannot find device "nvmf_tgt_br" 00:12:52.335 08:04:03 -- nvmf/common.sh@154 -- # true 00:12:52.335 08:04:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:52.335 Cannot find device "nvmf_tgt_br2" 00:12:52.335 08:04:03 -- nvmf/common.sh@155 -- # true 00:12:52.335 08:04:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:52.335 08:04:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:52.335 Cannot find device "nvmf_tgt_br" 00:12:52.335 08:04:03 -- nvmf/common.sh@157 -- # true 00:12:52.335 08:04:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:52.594 Cannot find device "nvmf_tgt_br2" 00:12:52.594 08:04:03 -- nvmf/common.sh@158 -- # true 00:12:52.594 08:04:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:52.594 08:04:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:52.594 08:04:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:52.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.594 08:04:03 -- nvmf/common.sh@161 -- # true 00:12:52.594 08:04:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:52.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:52.594 08:04:03 -- nvmf/common.sh@162 -- # true 00:12:52.594 08:04:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:52.594 08:04:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:52.594 08:04:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:52.594 08:04:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:52.594 08:04:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:52.594 08:04:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:52.594 08:04:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:52.594 08:04:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:52.594 08:04:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:52.594 08:04:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:52.594 08:04:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:52.594 08:04:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:52.594 08:04:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:52.594 08:04:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:52.594 08:04:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:52.594 08:04:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:52.594 08:04:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:52.594 08:04:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:52.594 08:04:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:52.594 08:04:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:52.594 08:04:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:52.594 08:04:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:52.594 08:04:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:52.594 08:04:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:52.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:52.594 00:12:52.594 --- 10.0.0.2 ping statistics --- 00:12:52.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.594 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:52.594 08:04:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:52.594 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:52.594 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:52.594 00:12:52.594 --- 10.0.0.3 ping statistics --- 00:12:52.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.594 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:52.594 08:04:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:52.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:12:52.594 00:12:52.595 --- 10.0.0.1 ping statistics --- 00:12:52.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.595 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:12:52.595 08:04:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.595 08:04:03 -- nvmf/common.sh@421 -- # return 0 00:12:52.595 08:04:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.595 08:04:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.595 08:04:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.595 08:04:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.595 08:04:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.595 08:04:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.595 08:04:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.595 08:04:03 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:52.595 08:04:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.595 08:04:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.595 08:04:03 -- common/autotest_common.sh@10 -- # set +x 00:12:52.854 08:04:03 -- nvmf/common.sh@469 -- # nvmfpid=78570 00:12:52.854 08:04:03 -- nvmf/common.sh@470 -- # waitforlisten 78570 00:12:52.854 08:04:03 -- common/autotest_common.sh@829 -- # '[' -z 78570 ']' 00:12:52.854 08:04:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.854 08:04:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.854 08:04:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.854 08:04:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.854 08:04:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.854 08:04:03 -- common/autotest_common.sh@10 -- # set +x 00:12:52.854 [2024-12-07 08:04:03.927715] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:52.854 [2024-12-07 08:04:03.927806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.854 [2024-12-07 08:04:04.073502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.113 [2024-12-07 08:04:04.138879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.113 [2024-12-07 08:04:04.139059] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.113 [2024-12-07 08:04:04.139077] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.113 [2024-12-07 08:04:04.139089] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
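The nvmf_veth_init steps above rebuild the test network from scratch before the target starts. A condensed sketch of that sequence, with interface names and addresses copied from the log (the second target interface, the FORWARD rule, and the stale-interface teardown are omitted):

  # condensed veth/netns setup as performed above (sketch; run as root)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator side reaching the target namespace, as tested above

With the network in place, the target is started inside the namespace and the invalid-parameter tests below talk to it over 10.0.0.2:4420.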
00:12:53.113 [2024-12-07 08:04:04.139238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.113 [2024-12-07 08:04:04.139550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.113 [2024-12-07 08:04:04.140018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.113 [2024-12-07 08:04:04.140055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.682 08:04:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.682 08:04:04 -- common/autotest_common.sh@862 -- # return 0 00:12:53.682 08:04:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.682 08:04:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:53.682 08:04:04 -- common/autotest_common.sh@10 -- # set +x 00:12:53.682 08:04:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.682 08:04:04 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:53.682 08:04:04 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30310 00:12:53.943 [2024-12-07 08:04:05.194273] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:54.202 08:04:05 -- target/invalid.sh@40 -- # out='2024/12/07 08:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30310 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:54.202 request: 00:12:54.202 { 00:12:54.202 "method": "nvmf_create_subsystem", 00:12:54.202 "params": { 00:12:54.202 "nqn": "nqn.2016-06.io.spdk:cnode30310", 00:12:54.202 "tgt_name": "foobar" 00:12:54.202 } 00:12:54.202 } 00:12:54.202 Got JSON-RPC error response 00:12:54.202 GoRPCClient: error on JSON-RPC call' 00:12:54.202 08:04:05 -- target/invalid.sh@41 -- # [[ 2024/12/07 08:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode30310 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:54.202 request: 00:12:54.202 { 00:12:54.202 "method": "nvmf_create_subsystem", 00:12:54.202 "params": { 00:12:54.202 "nqn": "nqn.2016-06.io.spdk:cnode30310", 00:12:54.202 "tgt_name": "foobar" 00:12:54.202 } 00:12:54.202 } 00:12:54.202 Got JSON-RPC error response 00:12:54.202 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:54.202 08:04:05 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:54.202 08:04:05 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23228 00:12:54.459 [2024-12-07 08:04:05.498745] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23228: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:54.460 08:04:05 -- target/invalid.sh@45 -- # out='2024/12/07 08:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23228 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:54.460 request: 00:12:54.460 { 00:12:54.460 "method": "nvmf_create_subsystem", 00:12:54.460 "params": { 00:12:54.460 "nqn": "nqn.2016-06.io.spdk:cnode23228", 00:12:54.460 
"serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:54.460 } 00:12:54.460 } 00:12:54.460 Got JSON-RPC error response 00:12:54.460 GoRPCClient: error on JSON-RPC call' 00:12:54.460 08:04:05 -- target/invalid.sh@46 -- # [[ 2024/12/07 08:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23228 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:54.460 request: 00:12:54.460 { 00:12:54.460 "method": "nvmf_create_subsystem", 00:12:54.460 "params": { 00:12:54.460 "nqn": "nqn.2016-06.io.spdk:cnode23228", 00:12:54.460 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:54.460 } 00:12:54.460 } 00:12:54.460 Got JSON-RPC error response 00:12:54.460 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.460 08:04:05 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:54.460 08:04:05 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18134 00:12:54.718 [2024-12-07 08:04:05.791067] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18134: invalid model number 'SPDK_Controller' 00:12:54.718 08:04:05 -- target/invalid.sh@50 -- # out='2024/12/07 08:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18134], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:54.718 request: 00:12:54.718 { 00:12:54.718 "method": "nvmf_create_subsystem", 00:12:54.718 "params": { 00:12:54.718 "nqn": "nqn.2016-06.io.spdk:cnode18134", 00:12:54.718 "model_number": "SPDK_Controller\u001f" 00:12:54.718 } 00:12:54.718 } 00:12:54.718 Got JSON-RPC error response 00:12:54.718 GoRPCClient: error on JSON-RPC call' 00:12:54.718 08:04:05 -- target/invalid.sh@51 -- # [[ 2024/12/07 08:04:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18134], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:54.718 request: 00:12:54.718 { 00:12:54.718 "method": "nvmf_create_subsystem", 00:12:54.718 "params": { 00:12:54.718 "nqn": "nqn.2016-06.io.spdk:cnode18134", 00:12:54.718 "model_number": "SPDK_Controller\u001f" 00:12:54.718 } 00:12:54.718 } 00:12:54.718 Got JSON-RPC error response 00:12:54.718 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:54.718 08:04:05 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:54.718 08:04:05 -- target/invalid.sh@19 -- # local length=21 ll 00:12:54.718 08:04:05 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.718 08:04:05 -- target/invalid.sh@21 -- # local chars 00:12:54.718 08:04:05 -- target/invalid.sh@22 -- # local string 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 45 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=- 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 92 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+='\' 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 78 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=N 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 105 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=i 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 94 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+='^' 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 36 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+='$' 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 125 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+='}' 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 50 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=2 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 49 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=1 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 71 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=G 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 57 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=9 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 39 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=\' 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # printf %x 118 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:54.718 08:04:05 -- target/invalid.sh@25 -- # string+=v 00:12:54.718 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 107 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+=k 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 123 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+='{' 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 122 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+=z 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 122 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+=z 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 102 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+=f 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 105 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+=i 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 72 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+=H 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # printf %x 69 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:54.719 08:04:05 -- target/invalid.sh@25 -- # string+=E 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.719 08:04:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.719 08:04:05 -- target/invalid.sh@28 -- # [[ - == \- ]] 00:12:54.719 08:04:05 -- target/invalid.sh@29 -- # string='\-\Ni^$}21G9'\''vk{zzfiHE' 00:12:54.719 08:04:05 -- target/invalid.sh@31 -- # echo '\-\Ni^$}21G9'\''vk{zzfiHE' 00:12:54.719 08:04:05 -- target/invalid.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '\-\Ni^$}21G9'\''vk{zzfiHE' nqn.2016-06.io.spdk:cnode16090 00:12:54.977 [2024-12-07 08:04:06.203729] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16090: invalid serial number '\-\Ni^$}21G9'vk{zzfiHE' 00:12:54.977 08:04:06 -- target/invalid.sh@54 -- # out='2024/12/07 08:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16090 serial_number:\-\Ni^$}21G9'\''vk{zzfiHE], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN \-\Ni^$}21G9'\''vk{zzfiHE 00:12:54.977 request: 00:12:54.977 { 00:12:54.977 "method": "nvmf_create_subsystem", 00:12:54.977 "params": { 00:12:54.977 "nqn": "nqn.2016-06.io.spdk:cnode16090", 00:12:54.977 "serial_number": "\\-\\Ni^$}21G9'\''vk{zzfiHE" 00:12:54.977 } 00:12:54.977 } 00:12:54.977 Got JSON-RPC error response 00:12:54.977 GoRPCClient: error on JSON-RPC call' 00:12:54.977 08:04:06 -- target/invalid.sh@55 -- # [[ 2024/12/07 08:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16090 serial_number:\-\Ni^$}21G9'vk{zzfiHE], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN \-\Ni^$}21G9'vk{zzfiHE 00:12:54.977 request: 00:12:54.977 { 00:12:54.977 "method": "nvmf_create_subsystem", 00:12:54.977 "params": { 00:12:54.977 "nqn": "nqn.2016-06.io.spdk:cnode16090", 00:12:54.977 "serial_number": "\\-\\Ni^$}21G9'vk{zzfiHE" 00:12:54.977 } 00:12:54.977 } 00:12:54.977 Got JSON-RPC error response 00:12:54.977 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:54.977 08:04:06 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:54.977 08:04:06 -- target/invalid.sh@19 -- # local length=41 ll 00:12:54.977 08:04:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:54.977 08:04:06 -- target/invalid.sh@21 -- # local chars 00:12:54.977 08:04:06 -- target/invalid.sh@22 -- # local string 00:12:54.978 08:04:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:54.978 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.978 08:04:06 -- target/invalid.sh@25 -- # printf %x 123 00:12:54.978 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:54.978 08:04:06 -- target/invalid.sh@25 -- # string+='{' 00:12:54.978 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.978 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:54.978 08:04:06 -- target/invalid.sh@25 -- # printf %x 117 00:12:54.978 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:54.978 08:04:06 -- target/invalid.sh@25 -- # string+=u 00:12:54.978 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:54.978 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 82 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=R 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 
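For reference, the negative-path checks traced above reduce to three scripts/rpc.py calls; this is a minimal recap using the exact invocations shown in the xtrace, each of which is expected to fail with a JSON-RPC error (Code=-32603 "Unable to find target" or Code=-32602 "Invalid SN"/"Invalid MN") rather than create a subsystem:

  # Unknown target name -> "Unable to find target foobar"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30310
  # Valid-looking serial number with a trailing non-printable 0x1f byte -> "Invalid SN"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode23228
  # Model number with the same trailing 0x1f byte -> "Invalid MN"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18134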
00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 49 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=1 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 84 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=T 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 126 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+='~' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 87 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=W 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 74 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=J 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 102 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=f 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 49 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=1 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 36 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+='$' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 71 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=G 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 52 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=4 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 56 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=8 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 76 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=L 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 88 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=X 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 35 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+='#' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 34 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+='"' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 77 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=M 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 39 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=\' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 97 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=a 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 87 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=W 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 63 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+='?' 
00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 73 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=I 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 73 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=I 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 53 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=5 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 59 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=';' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 95 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=_ 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 72 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=H 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 75 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=K 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 65 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=A 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 96 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+='`' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 90 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=Z 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 76 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=L 
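The repetitive printf %x / echo -e trace around this point is the harness's gen_random_s helper assembling the 41-character model number one random character at a time (the 21-character serial number earlier was built the same way). A hedged sketch of that helper, reconstructed from the xtrace rather than copied from target/invalid.sh, so details may differ:

  # Build an N-character string from random ASCII code points 32-127,
  # mirroring the per-character printf %x / echo -e pattern in the trace.
  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))
      for (( ll = 0; ll < length; ll++ )); do
          local x
          x=$(printf %x "${chars[RANDOM % ${#chars[@]}]}")
          string+=$(echo -e "\x$x")
      done
      echo "$string"
  }
  # gen_random_s 41 produces strings like: {uR1T~WJf1$G48LX#"M'aW?II5;_HKA`ZL~%x;3 Q

Feeding such a string as -s (serial number) or -d (model number) to nvmf_create_subsystem is what drives the Invalid SN / Invalid MN checks on either side of this trace.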
00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 126 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+='~' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 37 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=% 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 120 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=x 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 59 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=';' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 51 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=3 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 32 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=' ' 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # printf %x 81 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:55.237 08:04:06 -- target/invalid.sh@25 -- # string+=Q 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:55.237 08:04:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:55.237 08:04:06 -- target/invalid.sh@28 -- # [[ { == \- ]] 00:12:55.237 08:04:06 -- target/invalid.sh@31 -- # echo '{uR1T~WJf1$G48LX#"M'\''aW?II5;_HKA`ZL~%x;3 Q' 00:12:55.237 08:04:06 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '{uR1T~WJf1$G48LX#"M'\''aW?II5;_HKA`ZL~%x;3 Q' nqn.2016-06.io.spdk:cnode18530 00:12:55.496 [2024-12-07 08:04:06.728510] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18530: invalid model number '{uR1T~WJf1$G48LX#"M'aW?II5;_HKA`ZL~%x;3 Q' 00:12:55.496 08:04:06 -- target/invalid.sh@58 -- # out='2024/12/07 08:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:{uR1T~WJf1$G48LX#"M'\''aW?II5;_HKA`ZL~%x;3 Q nqn:nqn.2016-06.io.spdk:cnode18530], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN {uR1T~WJf1$G48LX#"M'\''aW?II5;_HKA`ZL~%x;3 Q 00:12:55.496 request: 00:12:55.496 { 00:12:55.496 "method": "nvmf_create_subsystem", 00:12:55.496 "params": { 00:12:55.496 "nqn": "nqn.2016-06.io.spdk:cnode18530", 00:12:55.496 "model_number": 
"{uR1T~WJf1$G48LX#\"M'\''aW?II5;_HKA`ZL~%x;3 Q" 00:12:55.496 } 00:12:55.496 } 00:12:55.496 Got JSON-RPC error response 00:12:55.496 GoRPCClient: error on JSON-RPC call' 00:12:55.496 08:04:06 -- target/invalid.sh@59 -- # [[ 2024/12/07 08:04:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:{uR1T~WJf1$G48LX#"M'aW?II5;_HKA`ZL~%x;3 Q nqn:nqn.2016-06.io.spdk:cnode18530], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN {uR1T~WJf1$G48LX#"M'aW?II5;_HKA`ZL~%x;3 Q 00:12:55.496 request: 00:12:55.496 { 00:12:55.496 "method": "nvmf_create_subsystem", 00:12:55.496 "params": { 00:12:55.496 "nqn": "nqn.2016-06.io.spdk:cnode18530", 00:12:55.496 "model_number": "{uR1T~WJf1$G48LX#\"M'aW?II5;_HKA`ZL~%x;3 Q" 00:12:55.496 } 00:12:55.496 } 00:12:55.496 Got JSON-RPC error response 00:12:55.496 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:55.496 08:04:06 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:55.755 [2024-12-07 08:04:07.017011] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.013 08:04:07 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:56.272 08:04:07 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:56.272 08:04:07 -- target/invalid.sh@67 -- # echo '' 00:12:56.272 08:04:07 -- target/invalid.sh@67 -- # head -n 1 00:12:56.272 08:04:07 -- target/invalid.sh@67 -- # IP= 00:12:56.272 08:04:07 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:56.272 [2024-12-07 08:04:07.546047] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:56.531 08:04:07 -- target/invalid.sh@69 -- # out='2024/12/07 08:04:07 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:56.531 request: 00:12:56.531 { 00:12:56.531 "method": "nvmf_subsystem_remove_listener", 00:12:56.531 "params": { 00:12:56.531 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.531 "listen_address": { 00:12:56.531 "trtype": "tcp", 00:12:56.531 "traddr": "", 00:12:56.531 "trsvcid": "4421" 00:12:56.531 } 00:12:56.531 } 00:12:56.531 } 00:12:56.531 Got JSON-RPC error response 00:12:56.531 GoRPCClient: error on JSON-RPC call' 00:12:56.531 08:04:07 -- target/invalid.sh@70 -- # [[ 2024/12/07 08:04:07 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:56.531 request: 00:12:56.531 { 00:12:56.531 "method": "nvmf_subsystem_remove_listener", 00:12:56.531 "params": { 00:12:56.531 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:56.531 "listen_address": { 00:12:56.531 "trtype": "tcp", 00:12:56.531 "traddr": "", 00:12:56.531 "trsvcid": "4421" 00:12:56.531 } 00:12:56.531 } 00:12:56.531 } 00:12:56.531 Got JSON-RPC error response 00:12:56.531 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:56.531 08:04:07 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3212 -i 0 00:12:56.790 [2024-12-07 08:04:07.814341] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3212: invalid cntlid range [0-65519] 00:12:56.790 08:04:07 -- target/invalid.sh@73 -- # out='2024/12/07 08:04:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3212], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:56.790 request: 00:12:56.790 { 00:12:56.790 "method": "nvmf_create_subsystem", 00:12:56.790 "params": { 00:12:56.790 "nqn": "nqn.2016-06.io.spdk:cnode3212", 00:12:56.790 "min_cntlid": 0 00:12:56.790 } 00:12:56.790 } 00:12:56.790 Got JSON-RPC error response 00:12:56.790 GoRPCClient: error on JSON-RPC call' 00:12:56.790 08:04:07 -- target/invalid.sh@74 -- # [[ 2024/12/07 08:04:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3212], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:56.790 request: 00:12:56.790 { 00:12:56.790 "method": "nvmf_create_subsystem", 00:12:56.790 "params": { 00:12:56.790 "nqn": "nqn.2016-06.io.spdk:cnode3212", 00:12:56.790 "min_cntlid": 0 00:12:56.790 } 00:12:56.790 } 00:12:56.790 Got JSON-RPC error response 00:12:56.790 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:56.790 08:04:07 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17917 -i 65520 00:12:57.049 [2024-12-07 08:04:08.110818] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17917: invalid cntlid range [65520-65519] 00:12:57.049 08:04:08 -- target/invalid.sh@75 -- # out='2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17917], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:57.049 request: 00:12:57.049 { 00:12:57.049 "method": "nvmf_create_subsystem", 00:12:57.049 "params": { 00:12:57.049 "nqn": "nqn.2016-06.io.spdk:cnode17917", 00:12:57.049 "min_cntlid": 65520 00:12:57.049 } 00:12:57.049 } 00:12:57.049 Got JSON-RPC error response 00:12:57.049 GoRPCClient: error on JSON-RPC call' 00:12:57.049 08:04:08 -- target/invalid.sh@76 -- # [[ 2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode17917], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:57.049 request: 00:12:57.049 { 00:12:57.049 "method": "nvmf_create_subsystem", 00:12:57.049 "params": { 00:12:57.049 "nqn": "nqn.2016-06.io.spdk:cnode17917", 00:12:57.049 "min_cntlid": 65520 00:12:57.049 } 00:12:57.049 } 00:12:57.049 Got JSON-RPC error response 00:12:57.049 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.049 08:04:08 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29350 -I 0 00:12:57.308 [2024-12-07 08:04:08.327153] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29350: invalid cntlid range [1-0] 00:12:57.308 08:04:08 -- target/invalid.sh@77 -- # out='2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode29350], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:57.308 request: 00:12:57.308 { 00:12:57.308 "method": "nvmf_create_subsystem", 00:12:57.308 "params": { 00:12:57.308 "nqn": "nqn.2016-06.io.spdk:cnode29350", 00:12:57.308 "max_cntlid": 0 00:12:57.308 } 00:12:57.308 } 00:12:57.308 Got JSON-RPC error response 00:12:57.308 GoRPCClient: error on JSON-RPC call' 00:12:57.308 08:04:08 -- target/invalid.sh@78 -- # [[ 2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode29350], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:57.308 request: 00:12:57.308 { 00:12:57.308 "method": "nvmf_create_subsystem", 00:12:57.308 "params": { 00:12:57.308 "nqn": "nqn.2016-06.io.spdk:cnode29350", 00:12:57.308 "max_cntlid": 0 00:12:57.308 } 00:12:57.308 } 00:12:57.308 Got JSON-RPC error response 00:12:57.308 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.308 08:04:08 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3555 -I 65520 00:12:57.308 [2024-12-07 08:04:08.547473] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3555: invalid cntlid range [1-65520] 00:12:57.308 08:04:08 -- target/invalid.sh@79 -- # out='2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3555], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:57.308 request: 00:12:57.308 { 00:12:57.308 "method": "nvmf_create_subsystem", 00:12:57.308 "params": { 00:12:57.308 "nqn": "nqn.2016-06.io.spdk:cnode3555", 00:12:57.308 "max_cntlid": 65520 00:12:57.308 } 00:12:57.308 } 00:12:57.308 Got JSON-RPC error response 00:12:57.308 GoRPCClient: error on JSON-RPC call' 00:12:57.308 08:04:08 -- target/invalid.sh@80 -- # [[ 2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode3555], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:57.308 request: 00:12:57.308 { 00:12:57.308 "method": "nvmf_create_subsystem", 00:12:57.308 "params": { 00:12:57.308 "nqn": "nqn.2016-06.io.spdk:cnode3555", 00:12:57.308 "max_cntlid": 65520 00:12:57.309 } 00:12:57.309 } 00:12:57.309 Got JSON-RPC error response 00:12:57.309 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.309 08:04:08 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15952 -i 6 -I 5 00:12:57.567 [2024-12-07 08:04:08.771842] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15952: invalid cntlid range [6-5] 00:12:57.567 08:04:08 -- target/invalid.sh@83 -- # out='2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15952], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:57.567 request: 00:12:57.567 { 00:12:57.567 "method": "nvmf_create_subsystem", 00:12:57.567 "params": { 00:12:57.567 "nqn": "nqn.2016-06.io.spdk:cnode15952", 00:12:57.567 "min_cntlid": 6, 
00:12:57.567 "max_cntlid": 5 00:12:57.567 } 00:12:57.567 } 00:12:57.567 Got JSON-RPC error response 00:12:57.567 GoRPCClient: error on JSON-RPC call' 00:12:57.567 08:04:08 -- target/invalid.sh@84 -- # [[ 2024/12/07 08:04:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode15952], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:57.567 request: 00:12:57.567 { 00:12:57.567 "method": "nvmf_create_subsystem", 00:12:57.567 "params": { 00:12:57.567 "nqn": "nqn.2016-06.io.spdk:cnode15952", 00:12:57.567 "min_cntlid": 6, 00:12:57.567 "max_cntlid": 5 00:12:57.567 } 00:12:57.567 } 00:12:57.567 Got JSON-RPC error response 00:12:57.567 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:57.567 08:04:08 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:57.827 08:04:08 -- target/invalid.sh@87 -- # out='request: 00:12:57.827 { 00:12:57.827 "name": "foobar", 00:12:57.827 "method": "nvmf_delete_target", 00:12:57.827 "req_id": 1 00:12:57.827 } 00:12:57.827 Got JSON-RPC error response 00:12:57.827 response: 00:12:57.827 { 00:12:57.827 "code": -32602, 00:12:57.827 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:57.827 }' 00:12:57.827 08:04:08 -- target/invalid.sh@88 -- # [[ request: 00:12:57.827 { 00:12:57.827 "name": "foobar", 00:12:57.827 "method": "nvmf_delete_target", 00:12:57.827 "req_id": 1 00:12:57.827 } 00:12:57.827 Got JSON-RPC error response 00:12:57.827 response: 00:12:57.827 { 00:12:57.827 "code": -32602, 00:12:57.827 "message": "The specified target doesn't exist, cannot delete it." 
00:12:57.827 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:57.827 08:04:08 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:57.827 08:04:08 -- target/invalid.sh@91 -- # nvmftestfini 00:12:57.827 08:04:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:57.827 08:04:08 -- nvmf/common.sh@116 -- # sync 00:12:57.827 08:04:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:57.827 08:04:08 -- nvmf/common.sh@119 -- # set +e 00:12:57.827 08:04:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:57.827 08:04:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:57.827 rmmod nvme_tcp 00:12:57.827 rmmod nvme_fabrics 00:12:57.827 rmmod nvme_keyring 00:12:57.827 08:04:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:57.827 08:04:08 -- nvmf/common.sh@123 -- # set -e 00:12:57.827 08:04:08 -- nvmf/common.sh@124 -- # return 0 00:12:57.827 08:04:08 -- nvmf/common.sh@477 -- # '[' -n 78570 ']' 00:12:57.827 08:04:08 -- nvmf/common.sh@478 -- # killprocess 78570 00:12:57.827 08:04:08 -- common/autotest_common.sh@936 -- # '[' -z 78570 ']' 00:12:57.827 08:04:08 -- common/autotest_common.sh@940 -- # kill -0 78570 00:12:57.827 08:04:08 -- common/autotest_common.sh@941 -- # uname 00:12:57.827 08:04:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:57.828 08:04:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78570 00:12:57.828 killing process with pid 78570 00:12:57.828 08:04:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:57.828 08:04:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:57.828 08:04:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78570' 00:12:57.828 08:04:09 -- common/autotest_common.sh@955 -- # kill 78570 00:12:57.828 08:04:09 -- common/autotest_common.sh@960 -- # wait 78570 00:12:58.087 08:04:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.087 08:04:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.087 08:04:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.087 08:04:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.087 08:04:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.087 08:04:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.087 08:04:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.087 08:04:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.087 08:04:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:58.087 00:12:58.087 real 0m5.912s 00:12:58.087 user 0m23.756s 00:12:58.087 sys 0m1.261s 00:12:58.087 08:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:58.087 08:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:58.087 ************************************ 00:12:58.087 END TEST nvmf_invalid 00:12:58.087 ************************************ 00:12:58.087 08:04:09 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:58.087 08:04:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:58.087 08:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:58.087 08:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:58.087 ************************************ 00:12:58.088 START TEST nvmf_abort 00:12:58.088 ************************************ 00:12:58.088 08:04:09 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:58.088 * Looking for test storage... 00:12:58.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.347 08:04:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:58.347 08:04:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:58.347 08:04:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:58.347 08:04:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:58.347 08:04:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:58.347 08:04:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:58.347 08:04:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:58.347 08:04:09 -- scripts/common.sh@335 -- # IFS=.-: 00:12:58.347 08:04:09 -- scripts/common.sh@335 -- # read -ra ver1 00:12:58.347 08:04:09 -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.347 08:04:09 -- scripts/common.sh@336 -- # read -ra ver2 00:12:58.347 08:04:09 -- scripts/common.sh@337 -- # local 'op=<' 00:12:58.347 08:04:09 -- scripts/common.sh@339 -- # ver1_l=2 00:12:58.347 08:04:09 -- scripts/common.sh@340 -- # ver2_l=1 00:12:58.347 08:04:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:58.347 08:04:09 -- scripts/common.sh@343 -- # case "$op" in 00:12:58.347 08:04:09 -- scripts/common.sh@344 -- # : 1 00:12:58.347 08:04:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:58.347 08:04:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:58.347 08:04:09 -- scripts/common.sh@364 -- # decimal 1 00:12:58.347 08:04:09 -- scripts/common.sh@352 -- # local d=1 00:12:58.347 08:04:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.347 08:04:09 -- scripts/common.sh@354 -- # echo 1 00:12:58.347 08:04:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:58.347 08:04:09 -- scripts/common.sh@365 -- # decimal 2 00:12:58.347 08:04:09 -- scripts/common.sh@352 -- # local d=2 00:12:58.347 08:04:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.347 08:04:09 -- scripts/common.sh@354 -- # echo 2 00:12:58.347 08:04:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:58.347 08:04:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:58.347 08:04:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:58.347 08:04:09 -- scripts/common.sh@367 -- # return 0 00:12:58.347 08:04:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.347 08:04:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.347 --rc genhtml_branch_coverage=1 00:12:58.347 --rc genhtml_function_coverage=1 00:12:58.347 --rc genhtml_legend=1 00:12:58.347 --rc geninfo_all_blocks=1 00:12:58.347 --rc geninfo_unexecuted_blocks=1 00:12:58.347 00:12:58.347 ' 00:12:58.347 08:04:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.347 --rc genhtml_branch_coverage=1 00:12:58.347 --rc genhtml_function_coverage=1 00:12:58.347 --rc genhtml_legend=1 00:12:58.347 --rc geninfo_all_blocks=1 00:12:58.347 --rc geninfo_unexecuted_blocks=1 00:12:58.347 00:12:58.347 ' 00:12:58.347 08:04:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.347 --rc genhtml_branch_coverage=1 00:12:58.347 --rc genhtml_function_coverage=1 00:12:58.347 --rc genhtml_legend=1 00:12:58.347 --rc 
geninfo_all_blocks=1 00:12:58.347 --rc geninfo_unexecuted_blocks=1 00:12:58.347 00:12:58.347 ' 00:12:58.347 08:04:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:58.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.347 --rc genhtml_branch_coverage=1 00:12:58.347 --rc genhtml_function_coverage=1 00:12:58.347 --rc genhtml_legend=1 00:12:58.347 --rc geninfo_all_blocks=1 00:12:58.347 --rc geninfo_unexecuted_blocks=1 00:12:58.347 00:12:58.347 ' 00:12:58.347 08:04:09 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.347 08:04:09 -- nvmf/common.sh@7 -- # uname -s 00:12:58.347 08:04:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.347 08:04:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.347 08:04:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.347 08:04:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.347 08:04:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.347 08:04:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.347 08:04:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.347 08:04:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.347 08:04:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.347 08:04:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.347 08:04:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:58.347 08:04:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:12:58.347 08:04:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.347 08:04:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.347 08:04:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.347 08:04:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.347 08:04:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.347 08:04:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.347 08:04:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.347 08:04:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.347 08:04:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.347 08:04:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.347 08:04:09 -- paths/export.sh@5 -- # export PATH 00:12:58.347 08:04:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.347 08:04:09 -- nvmf/common.sh@46 -- # : 0 00:12:58.347 08:04:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:58.347 08:04:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:58.347 08:04:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:58.347 08:04:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.347 08:04:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.347 08:04:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:58.347 08:04:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:58.347 08:04:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:58.347 08:04:09 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.347 08:04:09 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:58.347 08:04:09 -- target/abort.sh@14 -- # nvmftestinit 00:12:58.347 08:04:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:58.347 08:04:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.347 08:04:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:58.347 08:04:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:58.347 08:04:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:58.347 08:04:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.347 08:04:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.347 08:04:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.347 08:04:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:58.348 08:04:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:58.348 08:04:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:58.348 08:04:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:58.348 08:04:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:58.348 08:04:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:58.348 08:04:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.348 08:04:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.348 08:04:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:58.348 08:04:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:58.348 08:04:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.348 08:04:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.348 08:04:09 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.348 08:04:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.348 08:04:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.348 08:04:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.348 08:04:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.348 08:04:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.348 08:04:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:58.348 08:04:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:58.348 Cannot find device "nvmf_tgt_br" 00:12:58.348 08:04:09 -- nvmf/common.sh@154 -- # true 00:12:58.348 08:04:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.348 Cannot find device "nvmf_tgt_br2" 00:12:58.348 08:04:09 -- nvmf/common.sh@155 -- # true 00:12:58.348 08:04:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:58.348 08:04:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:58.348 Cannot find device "nvmf_tgt_br" 00:12:58.348 08:04:09 -- nvmf/common.sh@157 -- # true 00:12:58.348 08:04:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:58.348 Cannot find device "nvmf_tgt_br2" 00:12:58.348 08:04:09 -- nvmf/common.sh@158 -- # true 00:12:58.348 08:04:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:58.348 08:04:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:58.348 08:04:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.607 08:04:09 -- nvmf/common.sh@161 -- # true 00:12:58.607 08:04:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.607 08:04:09 -- nvmf/common.sh@162 -- # true 00:12:58.607 08:04:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.607 08:04:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:58.607 08:04:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.607 08:04:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.607 08:04:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.607 08:04:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.607 08:04:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.607 08:04:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:58.607 08:04:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:58.607 08:04:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:58.607 08:04:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:58.607 08:04:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:58.607 08:04:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:58.607 08:04:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.607 08:04:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.607 08:04:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:58.607 08:04:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:58.607 08:04:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:58.607 08:04:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:58.607 08:04:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:58.607 08:04:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:58.607 08:04:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:58.607 08:04:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:58.607 08:04:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:58.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:58.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:58.607 00:12:58.607 --- 10.0.0.2 ping statistics --- 00:12:58.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.607 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:58.607 08:04:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:58.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:58.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:58.607 00:12:58.607 --- 10.0.0.3 ping statistics --- 00:12:58.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.607 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:58.607 08:04:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:58.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:58.607 00:12:58.607 --- 10.0.0.1 ping statistics --- 00:12:58.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.607 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:58.607 08:04:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.607 08:04:09 -- nvmf/common.sh@421 -- # return 0 00:12:58.607 08:04:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:58.607 08:04:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.607 08:04:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:58.607 08:04:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:58.607 08:04:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.607 08:04:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:58.607 08:04:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:58.607 08:04:09 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:58.607 08:04:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:58.607 08:04:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.607 08:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:58.607 08:04:09 -- nvmf/common.sh@469 -- # nvmfpid=79084 00:12:58.607 08:04:09 -- nvmf/common.sh@470 -- # waitforlisten 79084 00:12:58.607 08:04:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:58.607 08:04:09 -- common/autotest_common.sh@829 -- # '[' -z 79084 ']' 00:12:58.607 08:04:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.607 08:04:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.607 08:04:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:58.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.607 08:04:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.607 08:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:58.866 [2024-12-07 08:04:09.882646] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:58.866 [2024-12-07 08:04:09.882748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.866 [2024-12-07 08:04:10.028692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:58.866 [2024-12-07 08:04:10.102593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:58.866 [2024-12-07 08:04:10.102767] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.866 [2024-12-07 08:04:10.102785] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.866 [2024-12-07 08:04:10.102797] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.866 [2024-12-07 08:04:10.102971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.866 [2024-12-07 08:04:10.103228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.866 [2024-12-07 08:04:10.103233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.804 08:04:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.804 08:04:10 -- common/autotest_common.sh@862 -- # return 0 00:12:59.804 08:04:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:59.804 08:04:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 08:04:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.804 08:04:10 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:59.804 08:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 [2024-12-07 08:04:10.871838] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.804 08:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.804 08:04:10 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:59.804 08:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 Malloc0 00:12:59.804 08:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.804 08:04:10 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:59.804 08:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 Delay0 00:12:59.804 08:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.804 08:04:10 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:59.804 08:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 08:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
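The nvmf_veth_init portion of the trace above is the harness building its private test network: a nvmf_tgt_ns_spdk namespace that will hold the target-side interfaces (10.0.0.2 and 10.0.0.3), an initiator-side veth end at 10.0.0.1, and an nvmf_br bridge joining the peer ends, with iptables opened for the NVMe/TCP port. A standalone approximation of that topology, using the same names, addresses, and port as the trace (a sketch only, not the harness code itself):

    # Approximate re-creation of the topology built by nvmf_veth_init (sketch).
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair per interface: the *_if end carries traffic, the *_br end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side ends move into the namespace; the initiator end stays in the host.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers so 10.0.0.1 can reach 10.0.0.2/3.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in and let bridged traffic be forwarded.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks in the trace (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm this topology before the target application is started.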
00:12:59.804 08:04:10 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:59.804 08:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 08:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.804 08:04:10 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:59.804 08:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 [2024-12-07 08:04:10.942390] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.804 08:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.804 08:04:10 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:59.804 08:04:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.804 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:59.804 08:04:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.804 08:04:10 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:00.063 [2024-12-07 08:04:11.122553] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:01.966 Initializing NVMe Controllers 00:13:01.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:01.966 controller IO queue size 128 less than required 00:13:01.966 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:01.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:01.966 Initialization complete. Launching workers. 
00:13:01.966 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35895 00:13:01.966 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35956, failed to submit 62 00:13:01.966 success 35895, unsuccess 61, failed 0 00:13:01.966 08:04:13 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:01.966 08:04:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.966 08:04:13 -- common/autotest_common.sh@10 -- # set +x 00:13:01.966 08:04:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.966 08:04:13 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:01.966 08:04:13 -- target/abort.sh@38 -- # nvmftestfini 00:13:01.966 08:04:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:01.966 08:04:13 -- nvmf/common.sh@116 -- # sync 00:13:01.966 08:04:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:01.966 08:04:13 -- nvmf/common.sh@119 -- # set +e 00:13:01.966 08:04:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:01.966 08:04:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:01.966 rmmod nvme_tcp 00:13:02.225 rmmod nvme_fabrics 00:13:02.225 rmmod nvme_keyring 00:13:02.225 08:04:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:02.225 08:04:13 -- nvmf/common.sh@123 -- # set -e 00:13:02.225 08:04:13 -- nvmf/common.sh@124 -- # return 0 00:13:02.225 08:04:13 -- nvmf/common.sh@477 -- # '[' -n 79084 ']' 00:13:02.225 08:04:13 -- nvmf/common.sh@478 -- # killprocess 79084 00:13:02.225 08:04:13 -- common/autotest_common.sh@936 -- # '[' -z 79084 ']' 00:13:02.225 08:04:13 -- common/autotest_common.sh@940 -- # kill -0 79084 00:13:02.225 08:04:13 -- common/autotest_common.sh@941 -- # uname 00:13:02.225 08:04:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:02.225 08:04:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79084 00:13:02.225 killing process with pid 79084 00:13:02.225 08:04:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:02.225 08:04:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:02.225 08:04:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79084' 00:13:02.225 08:04:13 -- common/autotest_common.sh@955 -- # kill 79084 00:13:02.225 08:04:13 -- common/autotest_common.sh@960 -- # wait 79084 00:13:02.484 08:04:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:02.484 08:04:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:02.484 08:04:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:02.484 08:04:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:02.484 08:04:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:02.484 08:04:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.484 08:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.484 08:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.484 08:04:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:02.484 00:13:02.484 real 0m4.262s 00:13:02.484 user 0m12.237s 00:13:02.484 sys 0m1.005s 00:13:02.484 08:04:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:02.484 08:04:13 -- common/autotest_common.sh@10 -- # set +x 00:13:02.484 ************************************ 00:13:02.484 END TEST nvmf_abort 00:13:02.484 ************************************ 00:13:02.484 08:04:13 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:02.484 08:04:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:02.484 08:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.484 08:04:13 -- common/autotest_common.sh@10 -- # set +x 00:13:02.484 ************************************ 00:13:02.484 START TEST nvmf_ns_hotplug_stress 00:13:02.484 ************************************ 00:13:02.484 08:04:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:02.484 * Looking for test storage... 00:13:02.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:02.485 08:04:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:02.485 08:04:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:02.485 08:04:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:02.743 08:04:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:02.743 08:04:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:02.743 08:04:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:02.743 08:04:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:02.743 08:04:13 -- scripts/common.sh@335 -- # IFS=.-: 00:13:02.743 08:04:13 -- scripts/common.sh@335 -- # read -ra ver1 00:13:02.743 08:04:13 -- scripts/common.sh@336 -- # IFS=.-: 00:13:02.743 08:04:13 -- scripts/common.sh@336 -- # read -ra ver2 00:13:02.743 08:04:13 -- scripts/common.sh@337 -- # local 'op=<' 00:13:02.743 08:04:13 -- scripts/common.sh@339 -- # ver1_l=2 00:13:02.743 08:04:13 -- scripts/common.sh@340 -- # ver2_l=1 00:13:02.743 08:04:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:02.743 08:04:13 -- scripts/common.sh@343 -- # case "$op" in 00:13:02.743 08:04:13 -- scripts/common.sh@344 -- # : 1 00:13:02.743 08:04:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:02.743 08:04:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:02.743 08:04:13 -- scripts/common.sh@364 -- # decimal 1 00:13:02.743 08:04:13 -- scripts/common.sh@352 -- # local d=1 00:13:02.743 08:04:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:02.743 08:04:13 -- scripts/common.sh@354 -- # echo 1 00:13:02.743 08:04:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:02.743 08:04:13 -- scripts/common.sh@365 -- # decimal 2 00:13:02.743 08:04:13 -- scripts/common.sh@352 -- # local d=2 00:13:02.743 08:04:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:02.743 08:04:13 -- scripts/common.sh@354 -- # echo 2 00:13:02.743 08:04:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:02.743 08:04:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:02.743 08:04:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:02.743 08:04:13 -- scripts/common.sh@367 -- # return 0 00:13:02.743 08:04:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:02.743 08:04:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:02.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.743 --rc genhtml_branch_coverage=1 00:13:02.743 --rc genhtml_function_coverage=1 00:13:02.743 --rc genhtml_legend=1 00:13:02.743 --rc geninfo_all_blocks=1 00:13:02.743 --rc geninfo_unexecuted_blocks=1 00:13:02.743 00:13:02.744 ' 00:13:02.744 08:04:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:02.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.744 --rc genhtml_branch_coverage=1 00:13:02.744 --rc genhtml_function_coverage=1 00:13:02.744 --rc genhtml_legend=1 00:13:02.744 --rc geninfo_all_blocks=1 00:13:02.744 --rc geninfo_unexecuted_blocks=1 00:13:02.744 00:13:02.744 ' 00:13:02.744 08:04:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:02.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.744 --rc genhtml_branch_coverage=1 00:13:02.744 --rc genhtml_function_coverage=1 00:13:02.744 --rc genhtml_legend=1 00:13:02.744 --rc geninfo_all_blocks=1 00:13:02.744 --rc geninfo_unexecuted_blocks=1 00:13:02.744 00:13:02.744 ' 00:13:02.744 08:04:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:02.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:02.744 --rc genhtml_branch_coverage=1 00:13:02.744 --rc genhtml_function_coverage=1 00:13:02.744 --rc genhtml_legend=1 00:13:02.744 --rc geninfo_all_blocks=1 00:13:02.744 --rc geninfo_unexecuted_blocks=1 00:13:02.744 00:13:02.744 ' 00:13:02.744 08:04:13 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:02.744 08:04:13 -- nvmf/common.sh@7 -- # uname -s 00:13:02.744 08:04:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.744 08:04:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.744 08:04:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.744 08:04:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.744 08:04:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.744 08:04:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.744 08:04:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.744 08:04:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.744 08:04:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.744 08:04:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.744 08:04:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
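The scripts/common.sh fragment above (lt 1.15 2 followed by cmp_versions 1.15 '<' 2) is the harness checking whether the installed lcov is older than 2.x so it can pick a compatible set of coverage options. Stripped of the harness plumbing, that component-by-component comparison amounts to something like the following (a simplified sketch; the real helper also supports '>', '<=', '>=' and validates each component):

    # Sketch: succeed if dot-separated version $1 is strictly lower than $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is pre-2.x, use the legacy option set"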
00:13:02.744 08:04:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:13:02.744 08:04:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.744 08:04:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.744 08:04:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:02.744 08:04:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.744 08:04:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.744 08:04:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.744 08:04:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.744 08:04:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.744 08:04:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.744 08:04:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.744 08:04:13 -- paths/export.sh@5 -- # export PATH 00:13:02.744 08:04:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.744 08:04:13 -- nvmf/common.sh@46 -- # : 0 00:13:02.744 08:04:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:02.744 08:04:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:02.744 08:04:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:02.744 08:04:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.744 08:04:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.744 08:04:13 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:02.744 08:04:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:02.744 08:04:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:02.744 08:04:13 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.744 08:04:13 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:02.744 08:04:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:02.744 08:04:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.744 08:04:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:02.744 08:04:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:02.744 08:04:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:02.744 08:04:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.744 08:04:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.744 08:04:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.744 08:04:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:02.744 08:04:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:02.744 08:04:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:02.744 08:04:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:02.744 08:04:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:02.744 08:04:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:02.744 08:04:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.744 08:04:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.744 08:04:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:02.744 08:04:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:02.744 08:04:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:02.744 08:04:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:02.744 08:04:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:02.744 08:04:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.744 08:04:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:02.744 08:04:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:02.744 08:04:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:02.744 08:04:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:02.744 08:04:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:02.744 08:04:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:02.744 Cannot find device "nvmf_tgt_br" 00:13:02.744 08:04:13 -- nvmf/common.sh@154 -- # true 00:13:02.744 08:04:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:02.744 Cannot find device "nvmf_tgt_br2" 00:13:02.744 08:04:13 -- nvmf/common.sh@155 -- # true 00:13:02.744 08:04:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:02.744 08:04:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:02.744 Cannot find device "nvmf_tgt_br" 00:13:02.744 08:04:13 -- nvmf/common.sh@157 -- # true 00:13:02.744 08:04:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:02.744 Cannot find device "nvmf_tgt_br2" 00:13:02.744 08:04:13 -- nvmf/common.sh@158 -- # true 00:13:02.744 08:04:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:02.744 08:04:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:02.744 08:04:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:02.744 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:02.744 08:04:13 -- nvmf/common.sh@161 -- # true 00:13:02.744 08:04:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:02.744 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:02.744 08:04:13 -- nvmf/common.sh@162 -- # true 00:13:02.744 08:04:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:02.744 08:04:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:02.744 08:04:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:02.744 08:04:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:02.744 08:04:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:02.744 08:04:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:02.744 08:04:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:02.744 08:04:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:03.003 08:04:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:03.003 08:04:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:03.003 08:04:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:03.003 08:04:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:03.003 08:04:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:03.003 08:04:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:03.003 08:04:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:03.003 08:04:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:03.003 08:04:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:03.003 08:04:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:03.003 08:04:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:03.003 08:04:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:03.003 08:04:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:03.003 08:04:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:03.003 08:04:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:03.003 08:04:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:03.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:13:03.003 00:13:03.003 --- 10.0.0.2 ping statistics --- 00:13:03.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.003 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:13:03.003 08:04:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:03.003 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:03.003 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:03.003 00:13:03.003 --- 10.0.0.3 ping statistics --- 00:13:03.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.003 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:03.003 08:04:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:03.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:03.003 00:13:03.003 --- 10.0.0.1 ping statistics --- 00:13:03.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.003 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:03.003 08:04:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.003 08:04:14 -- nvmf/common.sh@421 -- # return 0 00:13:03.003 08:04:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:03.003 08:04:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.003 08:04:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:03.003 08:04:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:03.003 08:04:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.003 08:04:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:03.003 08:04:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:03.003 08:04:14 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:03.003 08:04:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:03.003 08:04:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:03.003 08:04:14 -- common/autotest_common.sh@10 -- # set +x 00:13:03.003 08:04:14 -- nvmf/common.sh@469 -- # nvmfpid=79359 00:13:03.003 08:04:14 -- nvmf/common.sh@470 -- # waitforlisten 79359 00:13:03.003 08:04:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:03.003 08:04:14 -- common/autotest_common.sh@829 -- # '[' -z 79359 ']' 00:13:03.003 08:04:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.003 08:04:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.003 08:04:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.003 08:04:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.003 08:04:14 -- common/autotest_common.sh@10 -- # set +x 00:13:03.003 [2024-12-07 08:04:14.219554] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:03.003 [2024-12-07 08:04:14.219625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.262 [2024-12-07 08:04:14.351556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.262 [2024-12-07 08:04:14.408834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.262 [2024-12-07 08:04:14.408999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.262 [2024-12-07 08:04:14.409012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.262 [2024-12-07 08:04:14.409021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
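nvmfappstart, traced above, boils down to launching nvmf_tgt inside the test namespace with core mask 0xE, recording its PID (79359 in this run), and polling until the SPDK RPC socket at /var/tmp/spdk.sock answers. A reduced sketch of that start-and-wait pattern, with the binary path and arguments taken from the trace (the polling loop via rpc_get_methods is an approximation of the waitforlisten helper, which has more retry and error handling):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start the target inside the namespace and keep its PID for later cleanup.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for _ in {1..100}; do
        # rpc.py talks to /var/tmp/spdk.sock by default; any successful RPC means the app is ready.
        $rpc_py rpc_get_methods &> /dev/null && break
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.1
    done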
00:13:03.262 [2024-12-07 08:04:14.409155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.262 [2024-12-07 08:04:14.409998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.262 [2024-12-07 08:04:14.410071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.196 08:04:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:04.196 08:04:15 -- common/autotest_common.sh@862 -- # return 0 00:13:04.196 08:04:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:04.196 08:04:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:04.196 08:04:15 -- common/autotest_common.sh@10 -- # set +x 00:13:04.196 08:04:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.196 08:04:15 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:04.196 08:04:15 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:04.456 [2024-12-07 08:04:15.551961] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.456 08:04:15 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:04.715 08:04:15 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.715 [2024-12-07 08:04:15.986659] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.973 08:04:16 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.973 08:04:16 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:05.537 Malloc0 00:13:05.537 08:04:16 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:05.537 Delay0 00:13:05.537 08:04:16 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.795 08:04:16 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:06.052 NULL1 00:13:06.052 08:04:17 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:06.309 08:04:17 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:06.309 08:04:17 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79490 00:13:06.309 08:04:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:06.309 08:04:17 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.682 Read completed with error (sct=0, sc=11) 00:13:07.682 08:04:18 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.682 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:07.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.940 08:04:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:07.940 08:04:19 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:08.198 true 00:13:08.198 08:04:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:08.198 08:04:19 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.763 08:04:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:09.280 08:04:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:09.280 08:04:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:09.280 true 00:13:09.280 08:04:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:09.280 08:04:20 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.538 08:04:20 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.796 08:04:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:09.796 08:04:20 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:10.055 true 00:13:10.055 08:04:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:10.055 08:04:21 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.991 08:04:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.249 08:04:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:11.249 08:04:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:11.249 true 00:13:11.249 08:04:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:11.249 08:04:22 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.507 08:04:22 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.765 08:04:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:11.765 08:04:22 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:12.024 true 00:13:12.024 08:04:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:12.024 08:04:23 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.958 08:04:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.217 08:04:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 
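The pattern repeating through this stretch of the trace is the core of ns_hotplug_stress.sh: a randread spdk_nvme_perf workload (PERF_PID 79490 here) hammers the subsystem for 30 seconds while the script keeps re-attaching Delay0 as namespace 1, growing the NULL1 bdev by 1 MiB, and detaching the namespace again. Collapsed into a sketch (argument values taken from the trace; the real script's ordering and error handling are slightly more involved):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Read-only perf workload that keeps issuing I/O while namespaces come and go underneath it.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add Delay0 as NSID 1
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"                        # grow NULL1 by 1 MiB
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove the namespace
    done
    wait "$PERF_PID"

The "Read completed with error (sct=0, sc=11)" messages sprinkled through the trace are the expected side effect: reads that are in flight when the namespace is hot-removed fail back to the perf tool, which suppresses the repeats.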
00:13:13.217 08:04:24 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:13.475 true 00:13:13.475 08:04:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:13.475 08:04:24 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.733 08:04:24 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.991 08:04:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:13.991 08:04:25 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:13.991 true 00:13:13.991 08:04:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:13.991 08:04:25 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.922 08:04:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.180 08:04:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:15.180 08:04:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:15.439 true 00:13:15.439 08:04:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:15.439 08:04:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.697 08:04:26 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.697 08:04:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:15.697 08:04:26 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:15.955 true 00:13:15.955 08:04:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:15.955 08:04:27 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.888 08:04:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.146 08:04:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:17.146 08:04:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:17.404 true 00:13:17.404 08:04:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:17.404 08:04:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.663 08:04:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.921 08:04:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:17.921 08:04:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:18.180 true 00:13:18.180 08:04:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:18.180 08:04:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.117 08:04:30 -- target/ns_hotplug_stress.sh@46 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.117 08:04:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:19.117 08:04:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:19.375 true 00:13:19.375 08:04:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:19.375 08:04:30 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.638 08:04:30 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.896 08:04:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:19.896 08:04:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:20.154 true 00:13:20.154 08:04:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:20.154 08:04:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.146 08:04:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.146 08:04:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:21.146 08:04:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:21.404 true 00:13:21.404 08:04:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:21.404 08:04:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.662 08:04:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.920 08:04:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:21.920 08:04:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:22.177 true 00:13:22.177 08:04:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:22.177 08:04:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.113 08:04:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.371 08:04:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:23.371 08:04:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:23.630 true 00:13:23.630 08:04:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:23.630 08:04:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.889 08:04:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.146 08:04:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:24.146 08:04:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:24.402 true 00:13:24.402 08:04:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:24.402 08:04:35 -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.660 08:04:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.917 08:04:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:24.917 08:04:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:24.917 true 00:13:25.174 08:04:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:25.174 08:04:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.111 08:04:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.370 08:04:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:26.370 08:04:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:26.370 true 00:13:26.629 08:04:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:26.629 08:04:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.629 08:04:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.888 08:04:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:26.888 08:04:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:27.146 true 00:13:27.146 08:04:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:27.146 08:04:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.077 08:04:39 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.333 08:04:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:28.333 08:04:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:28.589 true 00:13:28.589 08:04:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:28.589 08:04:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.846 08:04:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.104 08:04:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:29.104 08:04:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:29.360 true 00:13:29.360 08:04:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:29.360 08:04:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.617 08:04:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.874 08:04:40 -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:13:29.874 08:04:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:30.130 true 00:13:30.131 08:04:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:30.131 08:04:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.063 08:04:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.321 08:04:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:31.321 08:04:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:31.580 true 00:13:31.580 08:04:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:31.580 08:04:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.838 08:04:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.097 08:04:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:32.097 08:04:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:32.355 true 00:13:32.355 08:04:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:32.355 08:04:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.614 08:04:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.874 08:04:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:32.874 08:04:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:33.132 true 00:13:33.389 08:04:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:33.389 08:04:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.962 08:04:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.528 08:04:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:34.528 08:04:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:34.528 true 00:13:34.787 08:04:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:34.787 08:04:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.145 08:04:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.145 08:04:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:35.145 08:04:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:35.402 true 00:13:35.402 08:04:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490 00:13:35.402 08:04:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.968 08:04:46 -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:35.968 08:04:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:13:35.968 08:04:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:36.227 true
00:13:36.227 08:04:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490
00:13:36.227 08:04:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:37.163 Initializing NVMe Controllers
00:13:37.163 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:37.163 Controller IO queue size 128, less than required.
00:13:37.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:37.163 Controller IO queue size 128, less than required.
00:13:37.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:37.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:37.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:37.164 Initialization complete. Launching workers.
00:13:37.164 ========================================================
00:13:37.164 Latency(us)
00:13:37.164 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:13:37.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     305.53       0.15  211415.97    4273.86 1027742.74
00:13:37.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   10567.17       5.16   12113.21    2724.71  509726.39
00:13:37.164 ========================================================
00:13:37.164 Total                                                                    :   10872.70       5.31   17713.81    2724.71 1027742.74
00:13:37.164 
00:13:37.164 08:04:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:37.422 08:04:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:13:37.422 08:04:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:13:37.681 true
00:13:37.681 08:04:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79490
00:13:37.681 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79490) - No such process
00:13:37.681 08:04:48 -- target/ns_hotplug_stress.sh@53 -- # wait 79490
00:13:37.681 08:04:48 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:37.940 08:04:49 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:38.199 null0
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:38.199 08:04:49 -- target/ns_hotplug_stress.sh@60 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:38.457 null1 00:13:38.457 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.457 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.457 08:04:49 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:38.715 null2 00:13:38.715 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.715 08:04:49 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.715 08:04:49 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:38.973 null3 00:13:38.973 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:38.974 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:38.974 08:04:50 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:39.232 null4 00:13:39.232 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.232 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.232 08:04:50 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:39.491 null5 00:13:39.491 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.491 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.491 08:04:50 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:39.751 null6 00:13:39.751 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.751 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.751 08:04:50 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:39.751 null7 00:13:39.751 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:39.751 08:04:50 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:39.751 08:04:50 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:39.751 08:04:50 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
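After the single-namespace loop, the trace switches to the multi-worker phase: eight null bdevs (null0 through null7, 100 MiB each with a 4096-byte block size) are created and one backgrounded add_remove job is started per bdev, with the PIDs collected so the script can wait for all of them. The launch side, condensed (the add_remove helper itself is sketched a little further down):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096    # 100 MiB null bdev, 4096-byte blocks
        add_remove $((i + 1)) "null$i" &              # one worker per bdev, NSIDs 1..8
        pids+=($!)
    done
    wait "${pids[@]}"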
00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
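Each of those backgrounded workers is what produces the interleaved nvmf_subsystem_add_ns -n / nvmf_subsystem_remove_ns pairs from here on: it pins one bdev to one namespace ID and attaches and detaches it ten times (per the (( i < 10 )) guard in the trace), racing the seven other workers doing the same against the one subsystem. Roughly:

    # Sketch of one add_remove worker: hot-add a bdev at a fixed NSID, then hot-remove it, ten times.
    add_remove() {
        local nsid=$1 bdev=$2
        local i
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }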
00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@66 -- # wait 80544 80545 80548 80550 80551 80553 80555 80557 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.751 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.010 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.271 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.530 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:40.789 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:40.789 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:40.789 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:40.789 08:04:51 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.789 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:40.789 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.789 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.789 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:40.789 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:40.789 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:40.789 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.048 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.307 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.565 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.566 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:41.566 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:41.824 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.824 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.824 08:04:52 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.824 08:04:52 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:41.824 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:41.824 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:41.824 08:04:52 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:41.824 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:41.824 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.824 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:41.824 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:41.824 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.082 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.340 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
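Because all eight workers share one console, their records interleave, but each individual cycle is just the two RPC calls below; shown untangled here for one worker (namespace 5 backed by null4, as in the trace):

    # one hotplug cycle: attach the namespace, then detach it again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5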
00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.600 08:04:53 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:42.858 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:42.858 08:04:53 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:42.858 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:42.858 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:42.858 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:42.858 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:42.858 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:42.858 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:42.858 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.117 08:04:54 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.117 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.118 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.118 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.118 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.118 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.118 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:43.376 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.651 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:43.910 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.910 08:04:54 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.910 08:04:54 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:43.910 08:04:54 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:43.910 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.167 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.168 
08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.168 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:44.425 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.426 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:44.426 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:44.684 08:04:55 -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.684 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:44.942 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.942 08:04:55 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.942 08:04:55 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:44.942 08:04:56 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.201 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:45.460 08:04:56 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:45.460 08:04:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:45.460 08:04:56 -- nvmf/common.sh@116 -- # sync 00:13:45.460 08:04:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.460 08:04:56 -- nvmf/common.sh@119 -- # set +e 00:13:45.460 08:04:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.460 08:04:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.460 rmmod nvme_tcp 00:13:45.460 rmmod nvme_fabrics 00:13:45.718 rmmod nvme_keyring 00:13:45.718 08:04:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.718 08:04:56 -- nvmf/common.sh@123 -- # set -e 00:13:45.718 08:04:56 -- nvmf/common.sh@124 -- # return 0 00:13:45.718 08:04:56 -- nvmf/common.sh@477 -- # '[' -n 79359 ']' 00:13:45.718 08:04:56 -- nvmf/common.sh@478 -- # killprocess 79359 00:13:45.718 08:04:56 -- common/autotest_common.sh@936 -- # '[' -z 79359 ']' 00:13:45.718 08:04:56 -- common/autotest_common.sh@940 -- # kill -0 79359 00:13:45.718 08:04:56 -- common/autotest_common.sh@941 -- # uname 00:13:45.718 08:04:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.718 08:04:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79359 00:13:45.718 08:04:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:45.718 08:04:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:45.718 killing process with pid 79359 00:13:45.718 
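By this point the workers have finished their ten iterations, so the script drops its trap and calls nvmftestfini: the records around here unload nvme-tcp and nvme-fabrics and then kill the nvmf_tgt process with pid 79359. A simplified sketch of that teardown, with the retry loop and safety checks from nvmf/common.sh condensed (nvmfpid stands for the pid the test recorded at startup):

    sync
    for i in {1..20}; do                  # retry unload, as the traced {1..20} loop does
        modprobe -v -r nvme-tcp && break  # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above come from these removals
    done
    modprobe -v -r nvme-fabrics           # explicit removal, as in the trace
    kill "$nvmfpid" && wait "$nvmfpid"    # stop the nvmf_tgt recorded in nvmfpid (79359 here)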
08:04:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79359' 00:13:45.718 08:04:56 -- common/autotest_common.sh@955 -- # kill 79359 00:13:45.718 08:04:56 -- common/autotest_common.sh@960 -- # wait 79359 00:13:45.718 08:04:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:45.718 08:04:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:45.718 08:04:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:45.718 08:04:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.718 08:04:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:45.718 08:04:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.718 08:04:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.718 08:04:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.977 08:04:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:45.977 00:13:45.977 real 0m43.408s 00:13:45.977 user 3m28.749s 00:13:45.977 sys 0m12.622s 00:13:45.977 08:04:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:45.977 08:04:57 -- common/autotest_common.sh@10 -- # set +x 00:13:45.977 ************************************ 00:13:45.977 END TEST nvmf_ns_hotplug_stress 00:13:45.977 ************************************ 00:13:45.977 08:04:57 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:45.978 08:04:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:45.978 08:04:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.978 08:04:57 -- common/autotest_common.sh@10 -- # set +x 00:13:45.978 ************************************ 00:13:45.978 START TEST nvmf_connect_stress 00:13:45.978 ************************************ 00:13:45.978 08:04:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:45.978 * Looking for test storage... 00:13:45.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.978 08:04:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:45.978 08:04:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:45.978 08:04:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:45.978 08:04:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:45.978 08:04:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:45.978 08:04:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:45.978 08:04:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:45.978 08:04:57 -- scripts/common.sh@335 -- # IFS=.-: 00:13:45.978 08:04:57 -- scripts/common.sh@335 -- # read -ra ver1 00:13:45.978 08:04:57 -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.978 08:04:57 -- scripts/common.sh@336 -- # read -ra ver2 00:13:45.978 08:04:57 -- scripts/common.sh@337 -- # local 'op=<' 00:13:45.978 08:04:57 -- scripts/common.sh@339 -- # ver1_l=2 00:13:45.978 08:04:57 -- scripts/common.sh@340 -- # ver2_l=1 00:13:45.978 08:04:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:45.978 08:04:57 -- scripts/common.sh@343 -- # case "$op" in 00:13:45.978 08:04:57 -- scripts/common.sh@344 -- # : 1 00:13:45.978 08:04:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:45.978 08:04:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.978 08:04:57 -- scripts/common.sh@364 -- # decimal 1 00:13:45.978 08:04:57 -- scripts/common.sh@352 -- # local d=1 00:13:45.978 08:04:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.978 08:04:57 -- scripts/common.sh@354 -- # echo 1 00:13:45.978 08:04:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:45.978 08:04:57 -- scripts/common.sh@365 -- # decimal 2 00:13:45.978 08:04:57 -- scripts/common.sh@352 -- # local d=2 00:13:45.978 08:04:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.978 08:04:57 -- scripts/common.sh@354 -- # echo 2 00:13:45.978 08:04:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:45.978 08:04:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:45.978 08:04:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:45.978 08:04:57 -- scripts/common.sh@367 -- # return 0 00:13:45.978 08:04:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.978 08:04:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:45.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.978 --rc genhtml_branch_coverage=1 00:13:45.978 --rc genhtml_function_coverage=1 00:13:45.978 --rc genhtml_legend=1 00:13:45.978 --rc geninfo_all_blocks=1 00:13:45.978 --rc geninfo_unexecuted_blocks=1 00:13:45.978 00:13:45.978 ' 00:13:45.978 08:04:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:45.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.978 --rc genhtml_branch_coverage=1 00:13:45.978 --rc genhtml_function_coverage=1 00:13:45.978 --rc genhtml_legend=1 00:13:45.978 --rc geninfo_all_blocks=1 00:13:45.978 --rc geninfo_unexecuted_blocks=1 00:13:45.978 00:13:45.978 ' 00:13:45.978 08:04:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:45.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.978 --rc genhtml_branch_coverage=1 00:13:45.978 --rc genhtml_function_coverage=1 00:13:45.978 --rc genhtml_legend=1 00:13:45.978 --rc geninfo_all_blocks=1 00:13:45.978 --rc geninfo_unexecuted_blocks=1 00:13:45.978 00:13:45.978 ' 00:13:45.978 08:04:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:45.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.978 --rc genhtml_branch_coverage=1 00:13:45.978 --rc genhtml_function_coverage=1 00:13:45.978 --rc genhtml_legend=1 00:13:45.978 --rc geninfo_all_blocks=1 00:13:45.978 --rc geninfo_unexecuted_blocks=1 00:13:45.978 00:13:45.978 ' 00:13:45.978 08:04:57 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.978 08:04:57 -- nvmf/common.sh@7 -- # uname -s 00:13:45.978 08:04:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.978 08:04:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.978 08:04:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.978 08:04:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.978 08:04:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.978 08:04:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.978 08:04:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.978 08:04:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.978 08:04:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.978 08:04:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.237 08:04:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
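The lt 1.15 2 records at the start of this test come from the stock dotted-version comparison in scripts/common.sh: both strings are split on '.', '-' and ':' and compared field by field with missing fields counting as zero, and since 1.15 is lower than 2 the lcov 1.x-style --rc lcov_branch_coverage options are exported. A self-contained sketch of that check (function name kept from the trace, internals condensed to numeric fields only):

    lt() {                                    # succeed if version $1 is strictly lower than $2
        local IFS=.-: i
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((i = 0; i < len; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1                              # equal versions are not 'less than'
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # matches the branch taken in the trace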
00:13:46.237 08:04:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:13:46.237 08:04:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.237 08:04:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.237 08:04:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.237 08:04:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.237 08:04:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.237 08:04:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.237 08:04:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.237 08:04:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.237 08:04:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.237 08:04:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.237 08:04:57 -- paths/export.sh@5 -- # export PATH 00:13:46.237 08:04:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.237 08:04:57 -- nvmf/common.sh@46 -- # : 0 00:13:46.237 08:04:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:46.237 08:04:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:46.237 08:04:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:46.237 08:04:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.237 08:04:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.237 08:04:57 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:46.237 08:04:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:46.237 08:04:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:46.237 08:04:57 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:46.237 08:04:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:46.237 08:04:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.237 08:04:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:46.237 08:04:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:46.237 08:04:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:46.237 08:04:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.237 08:04:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.237 08:04:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.237 08:04:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:46.237 08:04:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:46.237 08:04:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:46.237 08:04:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:46.237 08:04:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:46.237 08:04:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:46.237 08:04:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.237 08:04:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.237 08:04:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:46.237 08:04:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:46.237 08:04:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.237 08:04:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.237 08:04:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.237 08:04:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.237 08:04:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.237 08:04:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.237 08:04:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.237 08:04:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.237 08:04:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:46.237 08:04:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:46.237 Cannot find device "nvmf_tgt_br" 00:13:46.237 08:04:57 -- nvmf/common.sh@154 -- # true 00:13:46.237 08:04:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.237 Cannot find device "nvmf_tgt_br2" 00:13:46.237 08:04:57 -- nvmf/common.sh@155 -- # true 00:13:46.237 08:04:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:46.237 08:04:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:46.237 Cannot find device "nvmf_tgt_br" 00:13:46.237 08:04:57 -- nvmf/common.sh@157 -- # true 00:13:46.237 08:04:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:46.237 Cannot find device "nvmf_tgt_br2" 00:13:46.237 08:04:57 -- nvmf/common.sh@158 -- # true 00:13:46.237 08:04:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:46.237 08:04:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:46.237 08:04:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:46.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.237 08:04:57 -- nvmf/common.sh@161 -- # true 00:13:46.237 08:04:57 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:46.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:46.237 08:04:57 -- nvmf/common.sh@162 -- # true 00:13:46.237 08:04:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:46.237 08:04:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:46.237 08:04:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:46.237 08:04:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:46.237 08:04:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:46.237 08:04:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:46.237 08:04:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:46.237 08:04:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:46.237 08:04:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:46.237 08:04:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:46.237 08:04:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:46.237 08:04:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:46.237 08:04:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:46.237 08:04:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:46.238 08:04:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:46.238 08:04:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:46.238 08:04:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:46.238 08:04:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:46.497 08:04:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:46.497 08:04:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:46.497 08:04:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:46.497 08:04:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:46.497 08:04:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:46.497 08:04:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:46.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:13:46.497 00:13:46.497 --- 10.0.0.2 ping statistics --- 00:13:46.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.497 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:46.497 08:04:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:46.497 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.497 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:13:46.497 00:13:46.497 --- 10.0.0.3 ping statistics --- 00:13:46.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.497 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:46.497 08:04:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:13:46.497 00:13:46.497 --- 10.0.0.1 ping statistics --- 00:13:46.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.497 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:13:46.497 08:04:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.497 08:04:57 -- nvmf/common.sh@421 -- # return 0 00:13:46.497 08:04:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:46.497 08:04:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.497 08:04:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:46.497 08:04:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:46.497 08:04:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.497 08:04:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:46.497 08:04:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:46.497 08:04:57 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:46.497 08:04:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:46.497 08:04:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.497 08:04:57 -- common/autotest_common.sh@10 -- # set +x 00:13:46.497 08:04:57 -- nvmf/common.sh@469 -- # nvmfpid=81871 00:13:46.497 08:04:57 -- nvmf/common.sh@470 -- # waitforlisten 81871 00:13:46.497 08:04:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:46.497 08:04:57 -- common/autotest_common.sh@829 -- # '[' -z 81871 ']' 00:13:46.497 08:04:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.497 08:04:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.497 08:04:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.497 08:04:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.497 08:04:57 -- common/autotest_common.sh@10 -- # set +x 00:13:46.497 [2024-12-07 08:04:57.643712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:46.497 [2024-12-07 08:04:57.643797] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.756 [2024-12-07 08:04:57.776866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:46.756 [2024-12-07 08:04:57.841352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:46.756 [2024-12-07 08:04:57.841819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.756 [2024-12-07 08:04:57.841869] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.756 [2024-12-07 08:04:57.841992] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
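Because this run uses the virt NET_TYPE, nvmf_veth_init builds the whole test network from veth pairs: the target-side interfaces live in the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, the initiator end stays in the root namespace as 10.0.0.1, everything is bridged through nvmf_br, and an iptables rule admits TCP port 4420; the three pings above confirm those paths before the nvmf_tgt application is launched inside the namespace. A condensed sketch of that topology, using only the commands visible in the trace (initial cleanup and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # initiator to both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace back to the initiator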
00:13:46.756 [2024-12-07 08:04:57.842184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.756 [2024-12-07 08:04:57.842615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.756 [2024-12-07 08:04:57.842625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.692 08:04:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.693 08:04:58 -- common/autotest_common.sh@862 -- # return 0 00:13:47.693 08:04:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:47.693 08:04:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.693 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:47.693 08:04:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.693 08:04:58 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.693 08:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.693 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:47.693 [2024-12-07 08:04:58.650682] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.693 08:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.693 08:04:58 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.693 08:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.693 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:47.693 08:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.693 08:04:58 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.693 08:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.693 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:47.693 [2024-12-07 08:04:58.668599] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.693 08:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.693 08:04:58 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:47.693 08:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.693 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:47.693 NULL1 00:13:47.693 08:04:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.693 08:04:58 -- target/connect_stress.sh@21 -- # PERF_PID=81923 00:13:47.693 08:04:58 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:47.693 08:04:58 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:47.693 08:04:58 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- 
target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:47.693 08:04:58 -- target/connect_stress.sh@28 -- # cat 00:13:47.693 08:04:58 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:47.693 08:04:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.693 08:04:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.693 08:04:58 -- common/autotest_common.sh@10 -- # set +x 00:13:47.951 08:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.951 08:04:59 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:47.951 08:04:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.951 08:04:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.951 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:13:48.209 08:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.209 08:04:59 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:48.209 08:04:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.209 08:04:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.209 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:13:48.467 08:04:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.467 08:04:59 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:48.467 08:04:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.467 08:04:59 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:48.467 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:13:49.033 08:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.033 08:05:00 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:49.033 08:05:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.033 08:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.033 08:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:49.290 08:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.290 08:05:00 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:49.290 08:05:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.290 08:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.290 08:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:49.547 08:05:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.547 08:05:00 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:49.547 08:05:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.547 08:05:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.547 08:05:00 -- common/autotest_common.sh@10 -- # set +x 00:13:49.804 08:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.804 08:05:01 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:49.804 08:05:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.804 08:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.804 08:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:50.369 08:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.369 08:05:01 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:50.369 08:05:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.369 08:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.369 08:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:50.626 08:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.626 08:05:01 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:50.626 08:05:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.626 08:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.626 08:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:50.883 08:05:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.883 08:05:01 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:50.883 08:05:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.883 08:05:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.883 08:05:01 -- common/autotest_common.sh@10 -- # set +x 00:13:51.141 08:05:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.141 08:05:02 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:51.141 08:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.141 08:05:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.141 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:51.399 08:05:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.399 08:05:02 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:51.399 08:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.399 08:05:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.399 08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:51.981 08:05:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.981 08:05:02 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:51.981 08:05:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.981 08:05:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.981 
08:05:02 -- common/autotest_common.sh@10 -- # set +x 00:13:52.240 08:05:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.240 08:05:03 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:52.240 08:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.240 08:05:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.240 08:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:52.498 08:05:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.498 08:05:03 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:52.498 08:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.498 08:05:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.498 08:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:52.757 08:05:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.757 08:05:03 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:52.757 08:05:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.757 08:05:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.757 08:05:03 -- common/autotest_common.sh@10 -- # set +x 00:13:53.015 08:05:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.015 08:05:04 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:53.015 08:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.015 08:05:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.015 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:53.583 08:05:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.583 08:05:04 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:53.583 08:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.583 08:05:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.584 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:53.843 08:05:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.843 08:05:04 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:53.843 08:05:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.843 08:05:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.843 08:05:04 -- common/autotest_common.sh@10 -- # set +x 00:13:54.101 08:05:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.101 08:05:05 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:54.101 08:05:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.101 08:05:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.101 08:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:54.359 08:05:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.359 08:05:05 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:54.359 08:05:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.359 08:05:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.359 08:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:54.617 08:05:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.617 08:05:05 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:54.617 08:05:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.617 08:05:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.617 08:05:05 -- common/autotest_common.sh@10 -- # set +x 00:13:55.184 08:05:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.184 08:05:06 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:55.184 08:05:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.184 08:05:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.184 08:05:06 -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.442 08:05:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.442 08:05:06 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:55.442 08:05:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.442 08:05:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.442 08:05:06 -- common/autotest_common.sh@10 -- # set +x 00:13:55.701 08:05:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.701 08:05:06 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:55.701 08:05:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.701 08:05:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.701 08:05:06 -- common/autotest_common.sh@10 -- # set +x 00:13:55.960 08:05:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.960 08:05:07 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:55.960 08:05:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.960 08:05:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.960 08:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:56.219 08:05:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.219 08:05:07 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:56.219 08:05:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.219 08:05:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.219 08:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:56.791 08:05:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.791 08:05:07 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:56.791 08:05:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.791 08:05:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.791 08:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:57.049 08:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.049 08:05:08 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:57.049 08:05:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.049 08:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.049 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:13:57.308 08:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.308 08:05:08 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:57.308 08:05:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.308 08:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.308 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:13:57.566 08:05:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.566 08:05:08 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:57.566 08:05:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.566 08:05:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.566 08:05:08 -- common/autotest_common.sh@10 -- # set +x 00:13:57.831 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:57.831 08:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.831 08:05:09 -- target/connect_stress.sh@34 -- # kill -0 81923 00:13:57.831 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81923) - No such process 00:13:57.831 08:05:09 -- target/connect_stress.sh@38 -- # wait 81923 00:13:57.831 08:05:09 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:57.831 08:05:09 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:57.831 08:05:09 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:13:57.831 08:05:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:57.831 08:05:09 -- nvmf/common.sh@116 -- # sync 00:13:58.090 08:05:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:58.090 08:05:09 -- nvmf/common.sh@119 -- # set +e 00:13:58.090 08:05:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:58.090 08:05:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:58.090 rmmod nvme_tcp 00:13:58.090 rmmod nvme_fabrics 00:13:58.090 rmmod nvme_keyring 00:13:58.090 08:05:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:58.090 08:05:09 -- nvmf/common.sh@123 -- # set -e 00:13:58.090 08:05:09 -- nvmf/common.sh@124 -- # return 0 00:13:58.090 08:05:09 -- nvmf/common.sh@477 -- # '[' -n 81871 ']' 00:13:58.090 08:05:09 -- nvmf/common.sh@478 -- # killprocess 81871 00:13:58.090 08:05:09 -- common/autotest_common.sh@936 -- # '[' -z 81871 ']' 00:13:58.090 08:05:09 -- common/autotest_common.sh@940 -- # kill -0 81871 00:13:58.090 08:05:09 -- common/autotest_common.sh@941 -- # uname 00:13:58.090 08:05:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.090 08:05:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81871 00:13:58.090 killing process with pid 81871 00:13:58.090 08:05:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:58.090 08:05:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:58.090 08:05:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81871' 00:13:58.090 08:05:09 -- common/autotest_common.sh@955 -- # kill 81871 00:13:58.090 08:05:09 -- common/autotest_common.sh@960 -- # wait 81871 00:13:58.349 08:05:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:58.349 08:05:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:58.349 08:05:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:58.349 08:05:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.349 08:05:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:58.349 08:05:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.349 08:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.349 08:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.349 08:05:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:58.349 00:13:58.349 real 0m12.362s 00:13:58.349 user 0m41.366s 00:13:58.349 sys 0m3.195s 00:13:58.349 08:05:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:58.349 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.349 ************************************ 00:13:58.349 END TEST nvmf_connect_stress 00:13:58.349 ************************************ 00:13:58.349 08:05:09 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.349 08:05:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:58.349 08:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:58.349 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.349 ************************************ 00:13:58.349 START TEST nvmf_fused_ordering 00:13:58.349 ************************************ 00:13:58.349 08:05:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.349 * Looking for test storage... 
00:13:58.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:58.349 08:05:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:58.349 08:05:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:58.349 08:05:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:58.610 08:05:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:58.610 08:05:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:58.610 08:05:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:58.610 08:05:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:58.610 08:05:09 -- scripts/common.sh@335 -- # IFS=.-: 00:13:58.610 08:05:09 -- scripts/common.sh@335 -- # read -ra ver1 00:13:58.610 08:05:09 -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.610 08:05:09 -- scripts/common.sh@336 -- # read -ra ver2 00:13:58.610 08:05:09 -- scripts/common.sh@337 -- # local 'op=<' 00:13:58.610 08:05:09 -- scripts/common.sh@339 -- # ver1_l=2 00:13:58.610 08:05:09 -- scripts/common.sh@340 -- # ver2_l=1 00:13:58.610 08:05:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:58.610 08:05:09 -- scripts/common.sh@343 -- # case "$op" in 00:13:58.610 08:05:09 -- scripts/common.sh@344 -- # : 1 00:13:58.610 08:05:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:58.610 08:05:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.610 08:05:09 -- scripts/common.sh@364 -- # decimal 1 00:13:58.610 08:05:09 -- scripts/common.sh@352 -- # local d=1 00:13:58.610 08:05:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.610 08:05:09 -- scripts/common.sh@354 -- # echo 1 00:13:58.610 08:05:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:58.610 08:05:09 -- scripts/common.sh@365 -- # decimal 2 00:13:58.610 08:05:09 -- scripts/common.sh@352 -- # local d=2 00:13:58.610 08:05:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.610 08:05:09 -- scripts/common.sh@354 -- # echo 2 00:13:58.610 08:05:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:58.610 08:05:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:58.610 08:05:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:58.610 08:05:09 -- scripts/common.sh@367 -- # return 0 00:13:58.610 08:05:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.610 08:05:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:58.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.610 --rc genhtml_branch_coverage=1 00:13:58.610 --rc genhtml_function_coverage=1 00:13:58.610 --rc genhtml_legend=1 00:13:58.610 --rc geninfo_all_blocks=1 00:13:58.610 --rc geninfo_unexecuted_blocks=1 00:13:58.610 00:13:58.610 ' 00:13:58.610 08:05:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:58.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.610 --rc genhtml_branch_coverage=1 00:13:58.610 --rc genhtml_function_coverage=1 00:13:58.610 --rc genhtml_legend=1 00:13:58.610 --rc geninfo_all_blocks=1 00:13:58.610 --rc geninfo_unexecuted_blocks=1 00:13:58.610 00:13:58.610 ' 00:13:58.610 08:05:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:58.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.610 --rc genhtml_branch_coverage=1 00:13:58.610 --rc genhtml_function_coverage=1 00:13:58.610 --rc genhtml_legend=1 00:13:58.610 --rc geninfo_all_blocks=1 00:13:58.610 --rc geninfo_unexecuted_blocks=1 00:13:58.610 00:13:58.610 ' 00:13:58.610 
08:05:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:58.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.610 --rc genhtml_branch_coverage=1 00:13:58.610 --rc genhtml_function_coverage=1 00:13:58.610 --rc genhtml_legend=1 00:13:58.610 --rc geninfo_all_blocks=1 00:13:58.610 --rc geninfo_unexecuted_blocks=1 00:13:58.610 00:13:58.610 ' 00:13:58.610 08:05:09 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:58.610 08:05:09 -- nvmf/common.sh@7 -- # uname -s 00:13:58.610 08:05:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.610 08:05:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.610 08:05:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.610 08:05:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.610 08:05:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.610 08:05:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.610 08:05:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.610 08:05:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.610 08:05:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.610 08:05:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.610 08:05:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:13:58.610 08:05:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:13:58.610 08:05:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.610 08:05:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.610 08:05:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:58.610 08:05:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:58.610 08:05:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.610 08:05:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.610 08:05:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.610 08:05:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.610 08:05:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.610 08:05:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.610 08:05:09 -- paths/export.sh@5 -- # export PATH 00:13:58.610 08:05:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.610 08:05:09 -- nvmf/common.sh@46 -- # : 0 00:13:58.610 08:05:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:58.610 08:05:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:58.610 08:05:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:58.610 08:05:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.610 08:05:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.610 08:05:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:58.610 08:05:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:58.610 08:05:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:58.610 08:05:09 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:58.610 08:05:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:58.610 08:05:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.610 08:05:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:58.610 08:05:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:58.610 08:05:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:58.611 08:05:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.611 08:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.611 08:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.611 08:05:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:58.611 08:05:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:58.611 08:05:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:58.611 08:05:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:58.611 08:05:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:58.611 08:05:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:58.611 08:05:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.611 08:05:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.611 08:05:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:58.611 08:05:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:58.611 08:05:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:58.611 08:05:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:58.611 08:05:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:58.611 08:05:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:58.611 08:05:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:58.611 08:05:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:58.611 08:05:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:58.611 08:05:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:58.611 08:05:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:58.611 08:05:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:58.611 Cannot find device "nvmf_tgt_br" 00:13:58.611 08:05:09 -- nvmf/common.sh@154 -- # true 00:13:58.611 08:05:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:58.611 Cannot find device "nvmf_tgt_br2" 00:13:58.611 08:05:09 -- nvmf/common.sh@155 -- # true 00:13:58.611 08:05:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:58.611 08:05:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:58.611 Cannot find device "nvmf_tgt_br" 00:13:58.611 08:05:09 -- nvmf/common.sh@157 -- # true 00:13:58.611 08:05:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:58.611 Cannot find device "nvmf_tgt_br2" 00:13:58.611 08:05:09 -- nvmf/common.sh@158 -- # true 00:13:58.611 08:05:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:58.611 08:05:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:58.611 08:05:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:58.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.611 08:05:09 -- nvmf/common.sh@161 -- # true 00:13:58.611 08:05:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:58.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.611 08:05:09 -- nvmf/common.sh@162 -- # true 00:13:58.611 08:05:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:58.611 08:05:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:58.611 08:05:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:58.611 08:05:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:58.611 08:05:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:58.611 08:05:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:58.611 08:05:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:58.611 08:05:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:58.611 08:05:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:58.611 08:05:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:58.611 08:05:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:58.611 08:05:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:58.611 08:05:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:58.611 08:05:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:58.870 08:05:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:58.870 08:05:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:58.870 08:05:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:58.870 08:05:09 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:58.870 08:05:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:58.870 08:05:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:58.870 08:05:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:58.870 08:05:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:58.870 08:05:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:58.870 08:05:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:58.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:58.870 00:13:58.870 --- 10.0.0.2 ping statistics --- 00:13:58.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.870 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:58.870 08:05:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:58.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:58.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:58.870 00:13:58.870 --- 10.0.0.3 ping statistics --- 00:13:58.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.870 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:58.870 08:05:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:58.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:58.870 00:13:58.870 --- 10.0.0.1 ping statistics --- 00:13:58.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.870 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:58.870 08:05:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.870 08:05:09 -- nvmf/common.sh@421 -- # return 0 00:13:58.870 08:05:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:58.870 08:05:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.870 08:05:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:58.870 08:05:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:58.870 08:05:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.870 08:05:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:58.870 08:05:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:58.870 08:05:09 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:58.870 08:05:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:58.870 08:05:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:58.870 08:05:09 -- common/autotest_common.sh@10 -- # set +x 00:13:58.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.870 08:05:10 -- nvmf/common.sh@469 -- # nvmfpid=82257 00:13:58.870 08:05:10 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:58.870 08:05:10 -- nvmf/common.sh@470 -- # waitforlisten 82257 00:13:58.870 08:05:10 -- common/autotest_common.sh@829 -- # '[' -z 82257 ']' 00:13:58.870 08:05:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.870 08:05:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:58.870 08:05:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
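The bring-up that follows mirrors the connect_stress run above: the target is launched inside the namespace, configured over /var/tmp/spdk.sock, and the fused_ordering client is then pointed at the listener. A condensed sketch of that sequence, using the arguments visible in the trace (rpc_cmd is the autotest helper that forwards these calls to SPDK's JSON-RPC interface; the relative paths and backgrounding shown here are a simplification of the scripted run):

    # start the target inside the test namespace on core 1 (-m 0x2)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # configure a TCP transport, a subsystem with a null bdev namespace, and a listener
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # exercise fused command ordering against the listener
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'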
00:13:58.870 08:05:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:58.870 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:13:58.870 [2024-12-07 08:05:10.059933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:58.870 [2024-12-07 08:05:10.060047] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.129 [2024-12-07 08:05:10.201995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.129 [2024-12-07 08:05:10.269432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:59.129 [2024-12-07 08:05:10.269599] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.129 [2024-12-07 08:05:10.269615] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.129 [2024-12-07 08:05:10.269626] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.129 [2024-12-07 08:05:10.269662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.064 08:05:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.064 08:05:10 -- common/autotest_common.sh@862 -- # return 0 00:14:00.064 08:05:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:00.064 08:05:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.064 08:05:10 -- common/autotest_common.sh@10 -- # set +x 00:14:00.064 08:05:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.064 08:05:11 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.064 08:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.064 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:14:00.064 [2024-12-07 08:05:11.034699] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.064 08:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.064 08:05:11 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:00.064 08:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.064 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:14:00.064 08:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.064 08:05:11 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.064 08:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.064 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:14:00.064 [2024-12-07 08:05:11.054783] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.064 08:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.064 08:05:11 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:00.064 08:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.064 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:14:00.064 NULL1 00:14:00.064 08:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.064 08:05:11 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:00.064 08:05:11 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:00.064 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:14:00.064 08:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.064 08:05:11 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:00.064 08:05:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.064 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:14:00.065 08:05:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.065 08:05:11 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:00.065 [2024-12-07 08:05:11.106884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:00.065 [2024-12-07 08:05:11.106936] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82307 ] 00:14:00.322 Attached to nqn.2016-06.io.spdk:cnode1 00:14:00.322 Namespace ID: 1 size: 1GB 00:14:00.322 fused_ordering(0) 00:14:00.322 fused_ordering(1) 00:14:00.322 fused_ordering(2) 00:14:00.322 fused_ordering(3) 00:14:00.322 fused_ordering(4) 00:14:00.322 fused_ordering(5) 00:14:00.322 fused_ordering(6) 00:14:00.322 fused_ordering(7) 00:14:00.322 fused_ordering(8) 00:14:00.322 fused_ordering(9) 00:14:00.322 fused_ordering(10) 00:14:00.322 fused_ordering(11) 00:14:00.322 fused_ordering(12) 00:14:00.322 fused_ordering(13) 00:14:00.322 fused_ordering(14) 00:14:00.322 fused_ordering(15) 00:14:00.322 fused_ordering(16) 00:14:00.322 fused_ordering(17) 00:14:00.322 fused_ordering(18) 00:14:00.322 fused_ordering(19) 00:14:00.322 fused_ordering(20) 00:14:00.322 fused_ordering(21) 00:14:00.322 fused_ordering(22) 00:14:00.322 fused_ordering(23) 00:14:00.322 fused_ordering(24) 00:14:00.322 fused_ordering(25) 00:14:00.322 fused_ordering(26) 00:14:00.322 fused_ordering(27) 00:14:00.322 fused_ordering(28) 00:14:00.322 fused_ordering(29) 00:14:00.322 fused_ordering(30) 00:14:00.322 fused_ordering(31) 00:14:00.322 fused_ordering(32) 00:14:00.322 fused_ordering(33) 00:14:00.322 fused_ordering(34) 00:14:00.322 fused_ordering(35) 00:14:00.322 fused_ordering(36) 00:14:00.322 fused_ordering(37) 00:14:00.322 fused_ordering(38) 00:14:00.322 fused_ordering(39) 00:14:00.322 fused_ordering(40) 00:14:00.322 fused_ordering(41) 00:14:00.322 fused_ordering(42) 00:14:00.322 fused_ordering(43) 00:14:00.322 fused_ordering(44) 00:14:00.322 fused_ordering(45) 00:14:00.322 fused_ordering(46) 00:14:00.322 fused_ordering(47) 00:14:00.322 fused_ordering(48) 00:14:00.322 fused_ordering(49) 00:14:00.322 fused_ordering(50) 00:14:00.322 fused_ordering(51) 00:14:00.322 fused_ordering(52) 00:14:00.322 fused_ordering(53) 00:14:00.322 fused_ordering(54) 00:14:00.322 fused_ordering(55) 00:14:00.322 fused_ordering(56) 00:14:00.322 fused_ordering(57) 00:14:00.322 fused_ordering(58) 00:14:00.322 fused_ordering(59) 00:14:00.322 fused_ordering(60) 00:14:00.322 fused_ordering(61) 00:14:00.322 fused_ordering(62) 00:14:00.322 fused_ordering(63) 00:14:00.322 fused_ordering(64) 00:14:00.322 fused_ordering(65) 00:14:00.322 fused_ordering(66) 00:14:00.322 fused_ordering(67) 00:14:00.322 fused_ordering(68) 00:14:00.322 fused_ordering(69) 00:14:00.322 fused_ordering(70) 00:14:00.322 fused_ordering(71) 00:14:00.322 fused_ordering(72) 00:14:00.323 
fused_ordering(73) 00:14:00.323 fused_ordering(74) 00:14:00.323 fused_ordering(75) 00:14:00.323 fused_ordering(76) 00:14:00.323 fused_ordering(77) 00:14:00.323 fused_ordering(78) 00:14:00.323 fused_ordering(79) 00:14:00.323 fused_ordering(80) 00:14:00.323 fused_ordering(81) 00:14:00.323 fused_ordering(82) 00:14:00.323 fused_ordering(83) 00:14:00.323 fused_ordering(84) 00:14:00.323 fused_ordering(85) 00:14:00.323 fused_ordering(86) 00:14:00.323 fused_ordering(87) 00:14:00.323 fused_ordering(88) 00:14:00.323 fused_ordering(89) 00:14:00.323 fused_ordering(90) 00:14:00.323 fused_ordering(91) 00:14:00.323 fused_ordering(92) 00:14:00.323 fused_ordering(93) 00:14:00.323 fused_ordering(94) 00:14:00.323 fused_ordering(95) 00:14:00.323 fused_ordering(96) 00:14:00.323 fused_ordering(97) 00:14:00.323 fused_ordering(98) 00:14:00.323 fused_ordering(99) 00:14:00.323 fused_ordering(100) 00:14:00.323 fused_ordering(101) 00:14:00.323 fused_ordering(102) 00:14:00.323 fused_ordering(103) 00:14:00.323 fused_ordering(104) 00:14:00.323 fused_ordering(105) 00:14:00.323 fused_ordering(106) 00:14:00.323 fused_ordering(107) 00:14:00.323 fused_ordering(108) 00:14:00.323 fused_ordering(109) 00:14:00.323 fused_ordering(110) 00:14:00.323 fused_ordering(111) 00:14:00.323 fused_ordering(112) 00:14:00.323 fused_ordering(113) 00:14:00.323 fused_ordering(114) 00:14:00.323 fused_ordering(115) 00:14:00.323 fused_ordering(116) 00:14:00.323 fused_ordering(117) 00:14:00.323 fused_ordering(118) 00:14:00.323 fused_ordering(119) 00:14:00.323 fused_ordering(120) 00:14:00.323 fused_ordering(121) 00:14:00.323 fused_ordering(122) 00:14:00.323 fused_ordering(123) 00:14:00.323 fused_ordering(124) 00:14:00.323 fused_ordering(125) 00:14:00.323 fused_ordering(126) 00:14:00.323 fused_ordering(127) 00:14:00.323 fused_ordering(128) 00:14:00.323 fused_ordering(129) 00:14:00.323 fused_ordering(130) 00:14:00.323 fused_ordering(131) 00:14:00.323 fused_ordering(132) 00:14:00.323 fused_ordering(133) 00:14:00.323 fused_ordering(134) 00:14:00.323 fused_ordering(135) 00:14:00.323 fused_ordering(136) 00:14:00.323 fused_ordering(137) 00:14:00.323 fused_ordering(138) 00:14:00.323 fused_ordering(139) 00:14:00.323 fused_ordering(140) 00:14:00.323 fused_ordering(141) 00:14:00.323 fused_ordering(142) 00:14:00.323 fused_ordering(143) 00:14:00.323 fused_ordering(144) 00:14:00.323 fused_ordering(145) 00:14:00.323 fused_ordering(146) 00:14:00.323 fused_ordering(147) 00:14:00.323 fused_ordering(148) 00:14:00.323 fused_ordering(149) 00:14:00.323 fused_ordering(150) 00:14:00.323 fused_ordering(151) 00:14:00.323 fused_ordering(152) 00:14:00.323 fused_ordering(153) 00:14:00.323 fused_ordering(154) 00:14:00.323 fused_ordering(155) 00:14:00.323 fused_ordering(156) 00:14:00.323 fused_ordering(157) 00:14:00.323 fused_ordering(158) 00:14:00.323 fused_ordering(159) 00:14:00.323 fused_ordering(160) 00:14:00.323 fused_ordering(161) 00:14:00.323 fused_ordering(162) 00:14:00.323 fused_ordering(163) 00:14:00.323 fused_ordering(164) 00:14:00.323 fused_ordering(165) 00:14:00.323 fused_ordering(166) 00:14:00.323 fused_ordering(167) 00:14:00.323 fused_ordering(168) 00:14:00.323 fused_ordering(169) 00:14:00.323 fused_ordering(170) 00:14:00.323 fused_ordering(171) 00:14:00.323 fused_ordering(172) 00:14:00.323 fused_ordering(173) 00:14:00.323 fused_ordering(174) 00:14:00.323 fused_ordering(175) 00:14:00.323 fused_ordering(176) 00:14:00.323 fused_ordering(177) 00:14:00.323 fused_ordering(178) 00:14:00.323 fused_ordering(179) 00:14:00.323 fused_ordering(180) 00:14:00.323 
fused_ordering(181) 00:14:00.323 fused_ordering(182) 00:14:00.323 fused_ordering(183) 00:14:00.323 fused_ordering(184) 00:14:00.323 fused_ordering(185) 00:14:00.323 fused_ordering(186) 00:14:00.323 fused_ordering(187) 00:14:00.323 fused_ordering(188) 00:14:00.323 fused_ordering(189) 00:14:00.323 fused_ordering(190) 00:14:00.323 fused_ordering(191) 00:14:00.323 fused_ordering(192) 00:14:00.323 fused_ordering(193) 00:14:00.323 fused_ordering(194) 00:14:00.323 fused_ordering(195) 00:14:00.323 fused_ordering(196) 00:14:00.323 fused_ordering(197) 00:14:00.323 fused_ordering(198) 00:14:00.323 fused_ordering(199) 00:14:00.323 fused_ordering(200) 00:14:00.323 fused_ordering(201) 00:14:00.323 fused_ordering(202) 00:14:00.323 fused_ordering(203) 00:14:00.323 fused_ordering(204) 00:14:00.323 fused_ordering(205) 00:14:00.581 fused_ordering(206) 00:14:00.581 fused_ordering(207) 00:14:00.581 fused_ordering(208) 00:14:00.581 fused_ordering(209) 00:14:00.581 fused_ordering(210) 00:14:00.581 fused_ordering(211) 00:14:00.581 fused_ordering(212) 00:14:00.581 fused_ordering(213) 00:14:00.581 fused_ordering(214) 00:14:00.581 fused_ordering(215) 00:14:00.581 fused_ordering(216) 00:14:00.581 fused_ordering(217) 00:14:00.581 fused_ordering(218) 00:14:00.581 fused_ordering(219) 00:14:00.581 fused_ordering(220) 00:14:00.581 fused_ordering(221) 00:14:00.581 fused_ordering(222) 00:14:00.581 fused_ordering(223) 00:14:00.581 fused_ordering(224) 00:14:00.581 fused_ordering(225) 00:14:00.581 fused_ordering(226) 00:14:00.581 fused_ordering(227) 00:14:00.581 fused_ordering(228) 00:14:00.581 fused_ordering(229) 00:14:00.581 fused_ordering(230) 00:14:00.581 fused_ordering(231) 00:14:00.581 fused_ordering(232) 00:14:00.581 fused_ordering(233) 00:14:00.581 fused_ordering(234) 00:14:00.581 fused_ordering(235) 00:14:00.581 fused_ordering(236) 00:14:00.581 fused_ordering(237) 00:14:00.581 fused_ordering(238) 00:14:00.581 fused_ordering(239) 00:14:00.581 fused_ordering(240) 00:14:00.581 fused_ordering(241) 00:14:00.581 fused_ordering(242) 00:14:00.581 fused_ordering(243) 00:14:00.581 fused_ordering(244) 00:14:00.581 fused_ordering(245) 00:14:00.581 fused_ordering(246) 00:14:00.581 fused_ordering(247) 00:14:00.581 fused_ordering(248) 00:14:00.581 fused_ordering(249) 00:14:00.581 fused_ordering(250) 00:14:00.581 fused_ordering(251) 00:14:00.581 fused_ordering(252) 00:14:00.581 fused_ordering(253) 00:14:00.581 fused_ordering(254) 00:14:00.581 fused_ordering(255) 00:14:00.581 fused_ordering(256) 00:14:00.581 fused_ordering(257) 00:14:00.581 fused_ordering(258) 00:14:00.581 fused_ordering(259) 00:14:00.581 fused_ordering(260) 00:14:00.581 fused_ordering(261) 00:14:00.581 fused_ordering(262) 00:14:00.581 fused_ordering(263) 00:14:00.581 fused_ordering(264) 00:14:00.581 fused_ordering(265) 00:14:00.581 fused_ordering(266) 00:14:00.581 fused_ordering(267) 00:14:00.581 fused_ordering(268) 00:14:00.581 fused_ordering(269) 00:14:00.581 fused_ordering(270) 00:14:00.581 fused_ordering(271) 00:14:00.581 fused_ordering(272) 00:14:00.581 fused_ordering(273) 00:14:00.581 fused_ordering(274) 00:14:00.581 fused_ordering(275) 00:14:00.581 fused_ordering(276) 00:14:00.581 fused_ordering(277) 00:14:00.581 fused_ordering(278) 00:14:00.581 fused_ordering(279) 00:14:00.581 fused_ordering(280) 00:14:00.581 fused_ordering(281) 00:14:00.581 fused_ordering(282) 00:14:00.581 fused_ordering(283) 00:14:00.581 fused_ordering(284) 00:14:00.581 fused_ordering(285) 00:14:00.581 fused_ordering(286) 00:14:00.581 fused_ordering(287) 00:14:00.581 fused_ordering(288) 
00:14:00.581 [fused_ordering output abbreviated: entries fused_ordering(289) through fused_ordering(933) were emitted in unbroken sequence between 00:14:00.581 and 00:14:01.666]
00:14:01.666 fused_ordering(934) 00:14:01.666 fused_ordering(935) 00:14:01.666 fused_ordering(936) 00:14:01.666 fused_ordering(937) 00:14:01.666 fused_ordering(938) 00:14:01.666 fused_ordering(939) 00:14:01.666 fused_ordering(940) 00:14:01.666 fused_ordering(941) 00:14:01.666 fused_ordering(942) 00:14:01.666 fused_ordering(943) 00:14:01.666 fused_ordering(944) 00:14:01.666 fused_ordering(945) 00:14:01.666 fused_ordering(946) 00:14:01.666 fused_ordering(947) 00:14:01.666 fused_ordering(948) 00:14:01.666 fused_ordering(949) 00:14:01.666 fused_ordering(950) 00:14:01.666 fused_ordering(951) 00:14:01.666 fused_ordering(952) 00:14:01.666 fused_ordering(953) 00:14:01.666 fused_ordering(954) 00:14:01.666 fused_ordering(955) 00:14:01.666 fused_ordering(956) 00:14:01.666 fused_ordering(957) 00:14:01.666 fused_ordering(958) 00:14:01.666 fused_ordering(959) 00:14:01.666 fused_ordering(960) 00:14:01.666 fused_ordering(961) 00:14:01.666 fused_ordering(962) 00:14:01.666 fused_ordering(963) 00:14:01.666 fused_ordering(964) 00:14:01.666 fused_ordering(965) 00:14:01.666 fused_ordering(966) 00:14:01.666 fused_ordering(967) 00:14:01.666 fused_ordering(968) 00:14:01.666 fused_ordering(969) 00:14:01.666 fused_ordering(970) 00:14:01.666 fused_ordering(971) 00:14:01.666 fused_ordering(972) 00:14:01.666 fused_ordering(973) 00:14:01.666 fused_ordering(974) 00:14:01.666 fused_ordering(975) 00:14:01.666 fused_ordering(976) 00:14:01.666 fused_ordering(977) 00:14:01.666 fused_ordering(978) 00:14:01.666 fused_ordering(979) 00:14:01.666 fused_ordering(980) 00:14:01.666 fused_ordering(981) 00:14:01.666 fused_ordering(982) 00:14:01.666 fused_ordering(983) 00:14:01.666 fused_ordering(984) 00:14:01.666 fused_ordering(985) 00:14:01.666 fused_ordering(986) 00:14:01.666 fused_ordering(987) 00:14:01.666 fused_ordering(988) 00:14:01.666 fused_ordering(989) 00:14:01.666 fused_ordering(990) 00:14:01.666 fused_ordering(991) 00:14:01.666 fused_ordering(992) 00:14:01.666 fused_ordering(993) 00:14:01.666 fused_ordering(994) 00:14:01.666 fused_ordering(995) 00:14:01.666 fused_ordering(996) 00:14:01.666 fused_ordering(997) 00:14:01.666 fused_ordering(998) 00:14:01.666 fused_ordering(999) 00:14:01.666 fused_ordering(1000) 00:14:01.666 fused_ordering(1001) 00:14:01.666 fused_ordering(1002) 00:14:01.666 fused_ordering(1003) 00:14:01.666 fused_ordering(1004) 00:14:01.666 fused_ordering(1005) 00:14:01.666 fused_ordering(1006) 00:14:01.666 fused_ordering(1007) 00:14:01.666 fused_ordering(1008) 00:14:01.666 fused_ordering(1009) 00:14:01.666 fused_ordering(1010) 00:14:01.666 fused_ordering(1011) 00:14:01.666 fused_ordering(1012) 00:14:01.666 fused_ordering(1013) 00:14:01.666 fused_ordering(1014) 00:14:01.666 fused_ordering(1015) 00:14:01.666 fused_ordering(1016) 00:14:01.666 fused_ordering(1017) 00:14:01.666 fused_ordering(1018) 00:14:01.666 fused_ordering(1019) 00:14:01.666 fused_ordering(1020) 00:14:01.666 fused_ordering(1021) 00:14:01.666 fused_ordering(1022) 00:14:01.666 fused_ordering(1023) 00:14:01.666 08:05:12 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:01.666 08:05:12 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:01.666 08:05:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:01.666 08:05:12 -- nvmf/common.sh@116 -- # sync 00:14:01.666 08:05:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:01.666 08:05:12 -- nvmf/common.sh@119 -- # set +e 00:14:01.666 08:05:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:01.666 08:05:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:01.666 rmmod 
nvme_tcp 00:14:01.666 rmmod nvme_fabrics 00:14:01.666 rmmod nvme_keyring 00:14:01.925 08:05:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:01.925 08:05:12 -- nvmf/common.sh@123 -- # set -e 00:14:01.925 08:05:12 -- nvmf/common.sh@124 -- # return 0 00:14:01.925 08:05:12 -- nvmf/common.sh@477 -- # '[' -n 82257 ']' 00:14:01.925 08:05:12 -- nvmf/common.sh@478 -- # killprocess 82257 00:14:01.925 08:05:12 -- common/autotest_common.sh@936 -- # '[' -z 82257 ']' 00:14:01.925 08:05:12 -- common/autotest_common.sh@940 -- # kill -0 82257 00:14:01.925 08:05:12 -- common/autotest_common.sh@941 -- # uname 00:14:01.925 08:05:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:01.925 08:05:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82257 00:14:01.925 killing process with pid 82257 00:14:01.925 08:05:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:01.925 08:05:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:01.925 08:05:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82257' 00:14:01.925 08:05:12 -- common/autotest_common.sh@955 -- # kill 82257 00:14:01.925 08:05:12 -- common/autotest_common.sh@960 -- # wait 82257 00:14:01.925 08:05:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:01.925 08:05:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:01.925 08:05:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:01.925 08:05:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.925 08:05:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:01.925 08:05:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.925 08:05:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.925 08:05:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.925 08:05:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:02.185 00:14:02.185 real 0m3.708s 00:14:02.185 user 0m4.292s 00:14:02.185 sys 0m1.263s 00:14:02.185 08:05:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:02.185 08:05:13 -- common/autotest_common.sh@10 -- # set +x 00:14:02.185 ************************************ 00:14:02.185 END TEST nvmf_fused_ordering 00:14:02.185 ************************************ 00:14:02.185 08:05:13 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:02.185 08:05:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:02.185 08:05:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:02.185 08:05:13 -- common/autotest_common.sh@10 -- # set +x 00:14:02.185 ************************************ 00:14:02.185 START TEST nvmf_delete_subsystem 00:14:02.185 ************************************ 00:14:02.185 08:05:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:02.185 * Looking for test storage... 
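The teardown traced just above (sync, repeated modprobe -v -r nvme-tcp, modprobe -v -r nvme-fabrics, killing the nvmf_tgt pid, flushing nvmf_init_if) is the per-test cleanup nvmftestfini performs before the next test is launched. A minimal bash sketch of that sequence follows; the helper name, the retry/break structure of the loop and the use of a plain kill/wait are assumptions made for illustration, not the actual nvmf/common.sh source.

# Sketch of the per-test NVMe-oF cleanup; names and retry logic are assumed.
nvmf_cleanup_modules() {
    sync
    for i in {1..20}; do
        # nvme-tcp can stay busy while connections drain; retry until it unloads.
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
}
nvmf_cleanup_modules
kill "$nvmfpid"                  # stop the nvmf_tgt reactor started for this test
wait "$nvmfpid" 2>/dev/null || true
ip -4 addr flush nvmf_init_if    # drop the initiator-side test addresses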
00:14:02.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.185 08:05:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:02.185 08:05:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:02.185 08:05:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:02.185 08:05:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:02.185 08:05:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:02.185 08:05:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:02.185 08:05:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:02.185 08:05:13 -- scripts/common.sh@335 -- # IFS=.-: 00:14:02.185 08:05:13 -- scripts/common.sh@335 -- # read -ra ver1 00:14:02.185 08:05:13 -- scripts/common.sh@336 -- # IFS=.-: 00:14:02.185 08:05:13 -- scripts/common.sh@336 -- # read -ra ver2 00:14:02.185 08:05:13 -- scripts/common.sh@337 -- # local 'op=<' 00:14:02.185 08:05:13 -- scripts/common.sh@339 -- # ver1_l=2 00:14:02.185 08:05:13 -- scripts/common.sh@340 -- # ver2_l=1 00:14:02.185 08:05:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:02.185 08:05:13 -- scripts/common.sh@343 -- # case "$op" in 00:14:02.185 08:05:13 -- scripts/common.sh@344 -- # : 1 00:14:02.185 08:05:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:02.185 08:05:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:02.185 08:05:13 -- scripts/common.sh@364 -- # decimal 1 00:14:02.185 08:05:13 -- scripts/common.sh@352 -- # local d=1 00:14:02.185 08:05:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:02.185 08:05:13 -- scripts/common.sh@354 -- # echo 1 00:14:02.185 08:05:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:02.185 08:05:13 -- scripts/common.sh@365 -- # decimal 2 00:14:02.185 08:05:13 -- scripts/common.sh@352 -- # local d=2 00:14:02.185 08:05:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:02.185 08:05:13 -- scripts/common.sh@354 -- # echo 2 00:14:02.185 08:05:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:02.185 08:05:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:02.185 08:05:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:02.185 08:05:13 -- scripts/common.sh@367 -- # return 0 00:14:02.185 08:05:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:02.185 08:05:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:02.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.185 --rc genhtml_branch_coverage=1 00:14:02.185 --rc genhtml_function_coverage=1 00:14:02.185 --rc genhtml_legend=1 00:14:02.185 --rc geninfo_all_blocks=1 00:14:02.185 --rc geninfo_unexecuted_blocks=1 00:14:02.185 00:14:02.185 ' 00:14:02.185 08:05:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:02.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.185 --rc genhtml_branch_coverage=1 00:14:02.185 --rc genhtml_function_coverage=1 00:14:02.186 --rc genhtml_legend=1 00:14:02.186 --rc geninfo_all_blocks=1 00:14:02.186 --rc geninfo_unexecuted_blocks=1 00:14:02.186 00:14:02.186 ' 00:14:02.186 08:05:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:02.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.186 --rc genhtml_branch_coverage=1 00:14:02.186 --rc genhtml_function_coverage=1 00:14:02.186 --rc genhtml_legend=1 00:14:02.186 --rc geninfo_all_blocks=1 00:14:02.186 --rc geninfo_unexecuted_blocks=1 00:14:02.186 00:14:02.186 ' 00:14:02.186 
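The cmp_versions trace above is the harness checking whether the installed lcov (read via lcov --version | awk '{print $NF}') is older than 2.x before choosing the coverage flags. A rough bash sketch of that dotted-version comparison is given below; it is an approximation of the scripts/common.sh logic seen in the trace, and the function name is made up for the example.

# Approximate comparison: returns 0 (true) if version $1 sorts before $2.
version_lt() {
    # Split both versions on '.', '-' and ':' (mirrors the IFS=.-: trick in the trace).
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

# Usage matching the check in the trace:
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is older than 2.x"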
08:05:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:02.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:02.186 --rc genhtml_branch_coverage=1 00:14:02.186 --rc genhtml_function_coverage=1 00:14:02.186 --rc genhtml_legend=1 00:14:02.186 --rc geninfo_all_blocks=1 00:14:02.186 --rc geninfo_unexecuted_blocks=1 00:14:02.186 00:14:02.186 ' 00:14:02.186 08:05:13 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:02.186 08:05:13 -- nvmf/common.sh@7 -- # uname -s 00:14:02.186 08:05:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.186 08:05:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.186 08:05:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.186 08:05:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.186 08:05:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.186 08:05:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.186 08:05:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.186 08:05:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.186 08:05:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.186 08:05:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.186 08:05:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:14:02.186 08:05:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:14:02.186 08:05:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.186 08:05:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.186 08:05:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:02.186 08:05:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.186 08:05:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.186 08:05:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.186 08:05:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.186 08:05:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.186 08:05:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.186 08:05:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.186 08:05:13 -- paths/export.sh@5 -- # export PATH 00:14:02.186 08:05:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.186 08:05:13 -- nvmf/common.sh@46 -- # : 0 00:14:02.186 08:05:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:02.186 08:05:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:02.186 08:05:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:02.186 08:05:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.186 08:05:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.186 08:05:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:02.186 08:05:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:02.186 08:05:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:02.186 08:05:13 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:02.186 08:05:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:02.186 08:05:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.186 08:05:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:02.186 08:05:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:02.186 08:05:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:02.186 08:05:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.186 08:05:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.186 08:05:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.186 08:05:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:02.186 08:05:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:02.186 08:05:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:02.186 08:05:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:02.186 08:05:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:02.186 08:05:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:02.186 08:05:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.186 08:05:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.186 08:05:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:02.186 08:05:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:02.186 08:05:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:02.186 08:05:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:02.186 08:05:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:02.186 08:05:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:02.186 08:05:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:02.186 08:05:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:02.186 08:05:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:02.186 08:05:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:02.186 08:05:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:02.445 08:05:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:02.445 Cannot find device "nvmf_tgt_br" 00:14:02.445 08:05:13 -- nvmf/common.sh@154 -- # true 00:14:02.445 08:05:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.445 Cannot find device "nvmf_tgt_br2" 00:14:02.445 08:05:13 -- nvmf/common.sh@155 -- # true 00:14:02.445 08:05:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:02.445 08:05:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:02.445 Cannot find device "nvmf_tgt_br" 00:14:02.445 08:05:13 -- nvmf/common.sh@157 -- # true 00:14:02.445 08:05:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:02.445 Cannot find device "nvmf_tgt_br2" 00:14:02.445 08:05:13 -- nvmf/common.sh@158 -- # true 00:14:02.445 08:05:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:02.445 08:05:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:02.445 08:05:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.445 08:05:13 -- nvmf/common.sh@161 -- # true 00:14:02.445 08:05:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.445 08:05:13 -- nvmf/common.sh@162 -- # true 00:14:02.445 08:05:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:02.445 08:05:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:02.445 08:05:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:02.445 08:05:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:02.445 08:05:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:02.445 08:05:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:02.445 08:05:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:02.445 08:05:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:02.445 08:05:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:02.445 08:05:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:02.445 08:05:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:02.445 08:05:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:02.445 08:05:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:02.445 08:05:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:02.445 08:05:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:02.445 08:05:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:02.705 08:05:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:02.705 08:05:13 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:02.705 08:05:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:02.705 08:05:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:02.705 08:05:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:02.705 08:05:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:02.705 08:05:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:02.705 08:05:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:02.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:14:02.705 00:14:02.705 --- 10.0.0.2 ping statistics --- 00:14:02.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.705 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:02.705 08:05:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:02.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:02.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:14:02.705 00:14:02.705 --- 10.0.0.3 ping statistics --- 00:14:02.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.705 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:02.705 08:05:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:02.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:02.705 00:14:02.705 --- 10.0.0.1 ping statistics --- 00:14:02.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.705 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:02.705 08:05:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.705 08:05:13 -- nvmf/common.sh@421 -- # return 0 00:14:02.705 08:05:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:02.705 08:05:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.705 08:05:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:02.705 08:05:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:02.705 08:05:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.705 08:05:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:02.705 08:05:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:02.705 08:05:13 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:02.705 08:05:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:02.705 08:05:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:02.705 08:05:13 -- common/autotest_common.sh@10 -- # set +x 00:14:02.705 08:05:13 -- nvmf/common.sh@469 -- # nvmfpid=82501 00:14:02.705 08:05:13 -- nvmf/common.sh@470 -- # waitforlisten 82501 00:14:02.705 08:05:13 -- common/autotest_common.sh@829 -- # '[' -z 82501 ']' 00:14:02.705 08:05:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.705 08:05:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:02.705 08:05:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.705 08:05:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
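For orientation, the ip/iptables commands above assemble the topology that nvmf_veth_init builds for this run: the target lives in the nvmf_tgt_ns_spdk namespace, its veth peers are enslaved to the nvmf_br bridge together with the initiator-side peer, the initiator gets 10.0.0.1 while the target listeners get 10.0.0.2 and 10.0.0.3, and connectivity is verified with single pings before nvmf_tgt is started. The condensed sketch below leaves out the second target interface (nvmf_tgt_if2 / 10.0.0.3) and most error handling; it is a simplification of what the trace shows, not the exact common.sh code.

# Namespace for the SPDK target plus a veth pair per side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator on 10.0.0.1, target listener on 10.0.0.2 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the initiator-side and target-side peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic to the 4420 listener and verify reachability both ways.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator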
00:14:02.705 08:05:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.705 08:05:13 -- common/autotest_common.sh@10 -- # set +x 00:14:02.705 [2024-12-07 08:05:13.869314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:02.705 [2024-12-07 08:05:13.869399] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.964 [2024-12-07 08:05:14.009709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:02.964 [2024-12-07 08:05:14.066612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:02.964 [2024-12-07 08:05:14.066740] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.964 [2024-12-07 08:05:14.066751] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.964 [2024-12-07 08:05:14.066758] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.964 [2024-12-07 08:05:14.066899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.964 [2024-12-07 08:05:14.067421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.898 08:05:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.898 08:05:14 -- common/autotest_common.sh@862 -- # return 0 00:14:03.898 08:05:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:03.898 08:05:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.898 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 08:05:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.898 08:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.898 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 [2024-12-07 08:05:14.939447] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.898 08:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:03.898 08:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.898 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 08:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.898 08:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.898 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 [2024-12-07 08:05:14.955899] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.898 08:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:03.898 08:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.898 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 NULL1 00:14:03.898 08:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.898 08:05:14 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:03.898 08:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.898 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 Delay0 00:14:03.898 08:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.898 08:05:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.898 08:05:14 -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 08:05:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@28 -- # perf_pid=82552 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:03.898 08:05:14 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:03.898 [2024-12-07 08:05:15.150572] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:05.854 08:05:16 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.854 08:05:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.854 08:05:16 -- common/autotest_common.sh@10 -- # set +x 00:14:06.127 Read completed with error (sct=0, sc=8) 00:14:06.127 Write completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 starting I/O failed: -6 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 starting I/O failed: -6 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Write completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 starting I/O failed: -6 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Write completed with error (sct=0, sc=8) 00:14:06.128 starting I/O failed: -6 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 starting I/O failed: -6 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 starting I/O failed: -6 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Write completed with error (sct=0, sc=8) 00:14:06.128 Write completed with error (sct=0, sc=8) 00:14:06.128 starting I/O failed: -6 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 Read completed with error (sct=0, sc=8) 00:14:06.128 starting 
I/O failed: -6 00:14:06.128 [repeated completion-error output abbreviated: between 00:14:06.128 and 00:14:07.063 the queued commands returned 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' many times, with nvme_tcp_qpair_set_recv_state *ERROR* 'recv state ... is same with the state(5) to be set' reported for tqpair 0x2278870 (08:05:17.183563), 0x2277070 (08:05:18.164002), 0x2278bc0 (08:05:18.185127), 0x2279120 (08:05:18.185379) and 0x7f5a9000c600 (08:05:18.186554)] 00:14:07.063 [2024-12-07 08:05:18.187418] nvme_tcp.c:
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5a9000bf20 is same with the state(5) to be set 00:14:07.063 [2024-12-07 08:05:18.188466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2277070 (9): Bad file descriptor 00:14:07.063 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:07.063 08:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.063 08:05:18 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:07.063 08:05:18 -- target/delete_subsystem.sh@35 -- # kill -0 82552 00:14:07.063 08:05:18 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:07.063 Initializing NVMe Controllers 00:14:07.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:07.063 Controller IO queue size 128, less than required. 00:14:07.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:07.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:07.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:07.063 Initialization complete. Launching workers. 00:14:07.063 ======================================================== 00:14:07.063 Latency(us) 00:14:07.063 Device Information : IOPS MiB/s Average min max 00:14:07.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.06 0.08 895569.34 346.34 1009866.89 00:14:07.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.55 0.08 990574.74 494.04 2002067.34 00:14:07.063 ======================================================== 00:14:07.063 Total : 338.61 0.17 943141.79 346.34 2002067.34 00:14:07.063 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@35 -- # kill -0 82552 00:14:07.627 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82552) - No such process 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@45 -- # NOT wait 82552 00:14:07.627 08:05:18 -- common/autotest_common.sh@650 -- # local es=0 00:14:07.627 08:05:18 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82552 00:14:07.627 08:05:18 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:07.627 08:05:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.627 08:05:18 -- common/autotest_common.sh@642 -- # type -t wait 00:14:07.627 08:05:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.627 08:05:18 -- common/autotest_common.sh@653 -- # wait 82552 00:14:07.627 08:05:18 -- common/autotest_common.sh@653 -- # es=1 00:14:07.627 08:05:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.627 08:05:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.627 08:05:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.627 08:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.627 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:14:07.627 08:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.627 08:05:18 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:07.627 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:14:07.627 [2024-12-07 08:05:18.714649] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.627 08:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.627 08:05:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.627 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:14:07.627 08:05:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@54 -- # perf_pid=82598 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:07.627 08:05:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:07.627 [2024-12-07 08:05:18.883421] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:08.191 08:05:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.191 08:05:19 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:08.191 08:05:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:08.756 08:05:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:08.756 08:05:19 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:08.756 08:05:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:09.013 08:05:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:09.013 08:05:20 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:09.014 08:05:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:09.579 08:05:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:09.579 08:05:20 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:09.579 08:05:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:10.144 08:05:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:10.144 08:05:21 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:10.144 08:05:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:10.710 08:05:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:10.710 08:05:21 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:10.710 08:05:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:10.710 Initializing NVMe Controllers 00:14:10.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:10.710 Controller IO queue size 128, less than required. 00:14:10.710 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:10.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:10.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:10.710 Initialization complete. Launching workers. 
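The "Controller IO queue size 128, less than required" notice above is spdk_nvme_perf pointing out that, with -q 128, some requests may be queued at the NVMe driver rather than on the controller, and it suggests a lower queue depth or smaller I/O size. A minimal sketch of a re-run that follows that suggestion, reusing the invocation traced at delete_subsystem.sh line 52 with only the queue depth changed (-q 32 is an illustrative value, not one taken from this test):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 32 -w randrw -M 70 -o 512 -P 4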
00:14:10.710 ======================================================== 00:14:10.710 Latency(us) 00:14:10.710 Device Information : IOPS MiB/s Average min max 00:14:10.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003289.03 1000161.29 1041944.16 00:14:10.710 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005464.02 1000185.06 1042630.67 00:14:10.710 ======================================================== 00:14:10.710 Total : 256.00 0.12 1004376.53 1000161.29 1042630.67 00:14:10.710 00:14:11.275 08:05:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:11.275 08:05:22 -- target/delete_subsystem.sh@57 -- # kill -0 82598 00:14:11.275 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82598) - No such process 00:14:11.275 08:05:22 -- target/delete_subsystem.sh@67 -- # wait 82598 00:14:11.275 08:05:22 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:11.275 08:05:22 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:11.275 08:05:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:11.275 08:05:22 -- nvmf/common.sh@116 -- # sync 00:14:11.275 08:05:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:11.275 08:05:22 -- nvmf/common.sh@119 -- # set +e 00:14:11.275 08:05:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:11.275 08:05:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:11.275 rmmod nvme_tcp 00:14:11.275 rmmod nvme_fabrics 00:14:11.275 rmmod nvme_keyring 00:14:11.275 08:05:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:11.275 08:05:22 -- nvmf/common.sh@123 -- # set -e 00:14:11.275 08:05:22 -- nvmf/common.sh@124 -- # return 0 00:14:11.275 08:05:22 -- nvmf/common.sh@477 -- # '[' -n 82501 ']' 00:14:11.275 08:05:22 -- nvmf/common.sh@478 -- # killprocess 82501 00:14:11.275 08:05:22 -- common/autotest_common.sh@936 -- # '[' -z 82501 ']' 00:14:11.275 08:05:22 -- common/autotest_common.sh@940 -- # kill -0 82501 00:14:11.275 08:05:22 -- common/autotest_common.sh@941 -- # uname 00:14:11.275 08:05:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:11.275 08:05:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82501 00:14:11.275 killing process with pid 82501 00:14:11.275 08:05:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:11.275 08:05:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:11.275 08:05:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82501' 00:14:11.275 08:05:22 -- common/autotest_common.sh@955 -- # kill 82501 00:14:11.275 08:05:22 -- common/autotest_common.sh@960 -- # wait 82501 00:14:11.533 08:05:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:11.533 08:05:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:11.533 08:05:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:11.533 08:05:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.533 08:05:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:11.533 08:05:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.533 08:05:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.533 08:05:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.533 08:05:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:11.533 00:14:11.533 real 0m9.394s 00:14:11.533 user 0m28.986s 00:14:11.533 sys 0m1.492s 00:14:11.533 08:05:22 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:14:11.533 ************************************ 00:14:11.533 END TEST nvmf_delete_subsystem 00:14:11.533 ************************************ 00:14:11.533 08:05:22 -- common/autotest_common.sh@10 -- # set +x 00:14:11.533 08:05:22 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:11.533 08:05:22 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:11.533 08:05:22 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:11.533 08:05:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:11.533 08:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:11.533 08:05:22 -- common/autotest_common.sh@10 -- # set +x 00:14:11.533 ************************************ 00:14:11.533 START TEST nvmf_host_management 00:14:11.533 ************************************ 00:14:11.533 08:05:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:11.533 * Looking for test storage... 00:14:11.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.533 08:05:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:11.533 08:05:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:11.533 08:05:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:11.792 08:05:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:11.793 08:05:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:11.793 08:05:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:11.793 08:05:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:11.793 08:05:22 -- scripts/common.sh@335 -- # IFS=.-: 00:14:11.793 08:05:22 -- scripts/common.sh@335 -- # read -ra ver1 00:14:11.793 08:05:22 -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.793 08:05:22 -- scripts/common.sh@336 -- # read -ra ver2 00:14:11.793 08:05:22 -- scripts/common.sh@337 -- # local 'op=<' 00:14:11.793 08:05:22 -- scripts/common.sh@339 -- # ver1_l=2 00:14:11.793 08:05:22 -- scripts/common.sh@340 -- # ver2_l=1 00:14:11.793 08:05:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:11.793 08:05:22 -- scripts/common.sh@343 -- # case "$op" in 00:14:11.793 08:05:22 -- scripts/common.sh@344 -- # : 1 00:14:11.793 08:05:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:11.793 08:05:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:11.793 08:05:22 -- scripts/common.sh@364 -- # decimal 1 00:14:11.793 08:05:22 -- scripts/common.sh@352 -- # local d=1 00:14:11.793 08:05:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.793 08:05:22 -- scripts/common.sh@354 -- # echo 1 00:14:11.793 08:05:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:11.793 08:05:22 -- scripts/common.sh@365 -- # decimal 2 00:14:11.793 08:05:22 -- scripts/common.sh@352 -- # local d=2 00:14:11.793 08:05:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.793 08:05:22 -- scripts/common.sh@354 -- # echo 2 00:14:11.793 08:05:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:11.793 08:05:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:11.793 08:05:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:11.793 08:05:22 -- scripts/common.sh@367 -- # return 0 00:14:11.793 08:05:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.793 08:05:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.793 --rc genhtml_branch_coverage=1 00:14:11.793 --rc genhtml_function_coverage=1 00:14:11.793 --rc genhtml_legend=1 00:14:11.793 --rc geninfo_all_blocks=1 00:14:11.793 --rc geninfo_unexecuted_blocks=1 00:14:11.793 00:14:11.793 ' 00:14:11.793 08:05:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.793 --rc genhtml_branch_coverage=1 00:14:11.793 --rc genhtml_function_coverage=1 00:14:11.793 --rc genhtml_legend=1 00:14:11.793 --rc geninfo_all_blocks=1 00:14:11.793 --rc geninfo_unexecuted_blocks=1 00:14:11.793 00:14:11.793 ' 00:14:11.793 08:05:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.793 --rc genhtml_branch_coverage=1 00:14:11.793 --rc genhtml_function_coverage=1 00:14:11.793 --rc genhtml_legend=1 00:14:11.793 --rc geninfo_all_blocks=1 00:14:11.793 --rc geninfo_unexecuted_blocks=1 00:14:11.793 00:14:11.793 ' 00:14:11.793 08:05:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:11.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.793 --rc genhtml_branch_coverage=1 00:14:11.793 --rc genhtml_function_coverage=1 00:14:11.793 --rc genhtml_legend=1 00:14:11.793 --rc geninfo_all_blocks=1 00:14:11.793 --rc geninfo_unexecuted_blocks=1 00:14:11.793 00:14:11.793 ' 00:14:11.793 08:05:22 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:11.793 08:05:22 -- nvmf/common.sh@7 -- # uname -s 00:14:11.793 08:05:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.793 08:05:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.793 08:05:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.793 08:05:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.793 08:05:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.793 08:05:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.793 08:05:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.793 08:05:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.793 08:05:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.793 08:05:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.793 08:05:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
00:14:11.793 08:05:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:14:11.793 08:05:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.793 08:05:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.793 08:05:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:11.793 08:05:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:11.793 08:05:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.793 08:05:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.793 08:05:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.793 08:05:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.793 08:05:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.793 08:05:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.793 08:05:22 -- paths/export.sh@5 -- # export PATH 00:14:11.793 08:05:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.793 08:05:22 -- nvmf/common.sh@46 -- # : 0 00:14:11.793 08:05:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:11.793 08:05:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:11.793 08:05:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:11.793 08:05:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.793 08:05:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.793 08:05:22 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:11.793 08:05:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:11.793 08:05:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:11.793 08:05:22 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.793 08:05:22 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.793 08:05:22 -- target/host_management.sh@104 -- # nvmftestinit 00:14:11.793 08:05:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:11.793 08:05:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.793 08:05:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:11.793 08:05:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:11.793 08:05:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:11.793 08:05:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.793 08:05:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.793 08:05:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.793 08:05:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:11.793 08:05:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:11.793 08:05:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:11.793 08:05:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:11.793 08:05:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:11.793 08:05:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:11.793 08:05:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.793 08:05:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.793 08:05:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:11.793 08:05:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:11.793 08:05:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:11.793 08:05:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:11.793 08:05:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:11.793 08:05:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.793 08:05:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:11.793 08:05:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:11.793 08:05:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:11.793 08:05:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:11.793 08:05:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:11.793 08:05:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:11.793 Cannot find device "nvmf_tgt_br" 00:14:11.793 08:05:22 -- nvmf/common.sh@154 -- # true 00:14:11.793 08:05:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.793 Cannot find device "nvmf_tgt_br2" 00:14:11.793 08:05:22 -- nvmf/common.sh@155 -- # true 00:14:11.793 08:05:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:11.793 08:05:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:11.793 Cannot find device "nvmf_tgt_br" 00:14:11.793 08:05:22 -- nvmf/common.sh@157 -- # true 00:14:11.793 08:05:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:11.793 Cannot find device "nvmf_tgt_br2" 00:14:11.794 08:05:22 -- nvmf/common.sh@158 -- # true 00:14:11.794 08:05:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:11.794 08:05:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:11.794 08:05:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:11.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.794 08:05:23 -- nvmf/common.sh@161 -- # true 00:14:11.794 08:05:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:11.794 08:05:23 -- nvmf/common.sh@162 -- # true 00:14:11.794 08:05:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:11.794 08:05:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.053 08:05:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.053 08:05:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.053 08:05:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.053 08:05:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.053 08:05:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.053 08:05:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:12.053 08:05:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:12.053 08:05:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:12.053 08:05:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:12.053 08:05:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:12.053 08:05:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:12.053 08:05:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.053 08:05:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.053 08:05:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.053 08:05:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:12.053 08:05:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:12.053 08:05:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.053 08:05:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.053 08:05:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.053 08:05:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.053 08:05:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.053 08:05:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:12.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:12.053 00:14:12.053 --- 10.0.0.2 ping statistics --- 00:14:12.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.053 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:12.053 08:05:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:12.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:12.053 00:14:12.053 --- 10.0.0.3 ping statistics --- 00:14:12.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.053 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:12.053 08:05:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:12.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:12.053 00:14:12.053 --- 10.0.0.1 ping statistics --- 00:14:12.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.053 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:12.053 08:05:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.053 08:05:23 -- nvmf/common.sh@421 -- # return 0 00:14:12.053 08:05:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:12.053 08:05:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.053 08:05:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:12.053 08:05:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:12.053 08:05:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.053 08:05:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:12.053 08:05:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:12.053 08:05:23 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:12.053 08:05:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:12.053 08:05:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.053 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:12.053 ************************************ 00:14:12.053 START TEST nvmf_host_management 00:14:12.053 ************************************ 00:14:12.053 08:05:23 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:12.053 08:05:23 -- target/host_management.sh@69 -- # starttarget 00:14:12.053 08:05:23 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:12.053 08:05:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:12.053 08:05:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.053 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:12.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.053 08:05:23 -- nvmf/common.sh@469 -- # nvmfpid=82835 00:14:12.053 08:05:23 -- nvmf/common.sh@470 -- # waitforlisten 82835 00:14:12.053 08:05:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:12.053 08:05:23 -- common/autotest_common.sh@829 -- # '[' -z 82835 ']' 00:14:12.053 08:05:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.053 08:05:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.053 08:05:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.053 08:05:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.053 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:14:12.312 [2024-12-07 08:05:23.358804] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:12.312 [2024-12-07 08:05:23.358887] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.312 [2024-12-07 08:05:23.495474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.313 [2024-12-07 08:05:23.554713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:12.313 [2024-12-07 08:05:23.555006] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:12.313 [2024-12-07 08:05:23.555083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.313 [2024-12-07 08:05:23.555448] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.313 [2024-12-07 08:05:23.555635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.313 [2024-12-07 08:05:23.556097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.313 [2024-12-07 08:05:23.556260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:12.313 [2024-12-07 08:05:23.556261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.249 08:05:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.249 08:05:24 -- common/autotest_common.sh@862 -- # return 0 00:14:13.249 08:05:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:13.249 08:05:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.249 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 08:05:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.249 08:05:24 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.249 08:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.249 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 [2024-12-07 08:05:24.404258] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.249 08:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.249 08:05:24 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:13.249 08:05:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.249 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 08:05:24 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:13.249 08:05:24 -- target/host_management.sh@23 -- # cat 00:14:13.249 08:05:24 -- target/host_management.sh@30 -- # rpc_cmd 00:14:13.249 08:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.249 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:13.249 Malloc0 00:14:13.249 [2024-12-07 08:05:24.482287] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.249 08:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.249 08:05:24 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:13.249 08:05:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.249 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:13.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
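The target side that host_management.sh stands up above (TCP transport, a Malloc0 bdev, and an NVMe/TCP listener on 10.0.0.2 port 4420) is driven through rpc_cmd in the trace. For reference, a minimal sketch of the same sequence issued directly with scripts/rpc.py, using only RPC names that appear in this log; the bdev_malloc_create flags and the cnode0 subsystem options are assumptions for illustration (the serial mirrors the earlier nvmf_create_subsystem call for cnode1, and the 64/512 sizes come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE above):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                        # as traced at host_management.sh line 18
$rpc bdev_malloc_create -b Malloc0 64 512                           # assumed flag order; creates the Malloc0 bdev
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420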
00:14:13.508 08:05:24 -- target/host_management.sh@73 -- # perfpid=82917 00:14:13.508 08:05:24 -- target/host_management.sh@74 -- # waitforlisten 82917 /var/tmp/bdevperf.sock 00:14:13.508 08:05:24 -- common/autotest_common.sh@829 -- # '[' -z 82917 ']' 00:14:13.508 08:05:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.508 08:05:24 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:13.508 08:05:24 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:13.508 08:05:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.508 08:05:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.508 08:05:24 -- nvmf/common.sh@520 -- # config=() 00:14:13.508 08:05:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.508 08:05:24 -- nvmf/common.sh@520 -- # local subsystem config 00:14:13.508 08:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:13.508 08:05:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:13.508 08:05:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:13.508 { 00:14:13.508 "params": { 00:14:13.508 "name": "Nvme$subsystem", 00:14:13.508 "trtype": "$TEST_TRANSPORT", 00:14:13.508 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.508 "adrfam": "ipv4", 00:14:13.508 "trsvcid": "$NVMF_PORT", 00:14:13.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.508 "hdgst": ${hdgst:-false}, 00:14:13.508 "ddgst": ${ddgst:-false} 00:14:13.508 }, 00:14:13.508 "method": "bdev_nvme_attach_controller" 00:14:13.508 } 00:14:13.508 EOF 00:14:13.508 )") 00:14:13.508 08:05:24 -- nvmf/common.sh@542 -- # cat 00:14:13.508 08:05:24 -- nvmf/common.sh@544 -- # jq . 00:14:13.508 08:05:24 -- nvmf/common.sh@545 -- # IFS=, 00:14:13.508 08:05:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:13.508 "params": { 00:14:13.508 "name": "Nvme0", 00:14:13.508 "trtype": "tcp", 00:14:13.508 "traddr": "10.0.0.2", 00:14:13.508 "adrfam": "ipv4", 00:14:13.508 "trsvcid": "4420", 00:14:13.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:13.508 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:13.508 "hdgst": false, 00:14:13.508 "ddgst": false 00:14:13.508 }, 00:14:13.508 "method": "bdev_nvme_attach_controller" 00:14:13.508 }' 00:14:13.508 [2024-12-07 08:05:24.585551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:13.508 [2024-12-07 08:05:24.585825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82917 ] 00:14:13.508 [2024-12-07 08:05:24.726149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.766 [2024-12-07 08:05:24.785454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.766 Running I/O for 10 seconds... 
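bdevperf above receives its NVMe-oF bdev configuration as JSON over /dev/fd/63; the generated bdev_nvme_attach_controller entry is the one printed in the trace. A minimal sketch of the same run with the configuration written to a regular file instead, assuming the usual subsystems/bdev wrapper around that entry (the /tmp path is arbitrary):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10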
00:14:14.704 08:05:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.704 08:05:25 -- common/autotest_common.sh@862 -- # return 0 00:14:14.704 08:05:25 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:14.704 08:05:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.704 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:14:14.704 08:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.704 08:05:25 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.704 08:05:25 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:14.704 08:05:25 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:14.704 08:05:25 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:14.704 08:05:25 -- target/host_management.sh@52 -- # local ret=1 00:14:14.704 08:05:25 -- target/host_management.sh@53 -- # local i 00:14:14.704 08:05:25 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:14.704 08:05:25 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:14.704 08:05:25 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:14.704 08:05:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.704 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:14:14.704 08:05:25 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:14.704 08:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.704 08:05:25 -- target/host_management.sh@55 -- # read_io_count=2494 00:14:14.704 08:05:25 -- target/host_management.sh@58 -- # '[' 2494 -ge 100 ']' 00:14:14.704 08:05:25 -- target/host_management.sh@59 -- # ret=0 00:14:14.704 08:05:25 -- target/host_management.sh@60 -- # break 00:14:14.704 08:05:25 -- target/host_management.sh@64 -- # return 0 00:14:14.705 08:05:25 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.705 08:05:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.705 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:14:14.705 [2024-12-07 08:05:25.695299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695373] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695382] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695391] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the 
state(5) to be set 00:14:14.705 (the identical tcp.c:1576:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x174de70 repeats once per event from 08:05:25.695436 through 08:05:25.695601) 00:14:14.705 [2024-12-07 08:05:25.695609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695616] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.695624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x174de70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.696859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.705 [2024-12-07 08:05:25.696914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.696928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.705 [2024-12-07 08:05:25.696938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.696948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.705 [2024-12-07 08:05:25.696957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.696967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.705 [2024-12-07 08:05:25.696976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.696985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f45a70 is same with the state(5) to be set 00:14:14.705 [2024-12-07 08:05:25.697328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.705 [2024-12-07 08:05:25.697357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.697378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.705 [2024-12-07 08:05:25.697388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.697400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.705 [2024-12-07 08:05:25.697409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.697420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:14.705 [2024-12-07 08:05:25.697429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.705 [2024-12-07 08:05:25.697440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:14.705 [2024-12-07 08:05:25.697450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:14.705 [2024-12-07 08:05:25.697462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:14.705 [2024-12-07 08:05:25.697471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair of records - an nvme_io_qpair_print_command READ/WRITE on sqid:1 (cids 0-63, lbas 78848-89344) followed by an spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" - repeats for every I/O still outstanding on the qpair while its submission queue is deleted, timestamps 08:05:25.697482 through 08:05:25.698675 ...]
00:14:14.707 [2024-12-07 08:05:25.698692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:14.707 [2024-12-07 08:05:25.698701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:14.707 [2024-12-07 08:05:25.698712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9dc0 is same with the state(5) to be set
00:14:14.707 [2024-12-07 08:05:25.698774] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb:
*NOTICE*: qpair 0x1fe9dc0 was disconnected and freed. reset controller. 00:14:14.707 08:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.707 [2024-12-07 08:05:25.699881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:14.707 08:05:25 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:14.707 08:05:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.707 08:05:25 -- common/autotest_common.sh@10 -- # set +x 00:14:14.707 task offset: 83968 on job bdev=Nvme0n1 fails 00:14:14.707 00:14:14.707 Latency(us) 00:14:14.707 [2024-12-07T08:05:25.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.707 [2024-12-07T08:05:25.983Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:14.707 [2024-12-07T08:05:25.983Z] Job: Nvme0n1 ended in about 0.74 seconds with error 00:14:14.707 Verification LBA range: start 0x0 length 0x400 00:14:14.707 Nvme0n1 : 0.74 3605.57 225.35 86.01 0.00 17061.81 2293.76 23235.49 00:14:14.707 [2024-12-07T08:05:25.983Z] =================================================================================================================== 00:14:14.707 [2024-12-07T08:05:25.983Z] Total : 3605.57 225.35 86.01 0.00 17061.81 2293.76 23235.49 00:14:14.707 [2024-12-07 08:05:25.701830] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:14.707 [2024-12-07 08:05:25.701860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45a70 (9): Bad file descriptor 00:14:14.707 [2024-12-07 08:05:25.702782] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:14:14.707 [2024-12-07 08:05:25.702880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:14.707 [2024-12-07 08:05:25.702919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.707 [2024-12-07 08:05:25.702935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:14:14.707 [2024-12-07 08:05:25.702946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:14:14.707 [2024-12-07 08:05:25.702956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:14:14.707 [2024-12-07 08:05:25.702964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f45a70 00:14:14.707 [2024-12-07 08:05:25.703000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f45a70 (9): Bad file descriptor 00:14:14.707 [2024-12-07 08:05:25.703017] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:14:14.707 [2024-12-07 08:05:25.703027] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:14:14.707 [2024-12-07 08:05:25.703037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
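The CONNECT rejections above (sct 1, sc 132, "does not allow host") line up with the nvmf_subsystem_add_host call host_management.sh issues at this point: nqn.2016-06.io.spdk:host0 is not yet on the allowed-host list of nqn.2016-06.io.spdk:cnode0, so the fabric CONNECT keeps failing until it is added. A minimal sketch of that RPC, assuming the default rpc.py socket:

# Allow host0 to connect to cnode0; until this is applied the target keeps
# rejecting CONNECT with "Subsystem ... does not allow host ..." as logged above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0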
00:14:14.707 [2024-12-07 08:05:25.703053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:14:14.707 08:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.707 08:05:25 -- target/host_management.sh@87 -- # sleep 1 00:14:15.643 08:05:26 -- target/host_management.sh@91 -- # kill -9 82917 00:14:15.643 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82917) - No such process 00:14:15.643 08:05:26 -- target/host_management.sh@91 -- # true 00:14:15.643 08:05:26 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:15.643 08:05:26 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:15.643 08:05:26 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:15.643 08:05:26 -- nvmf/common.sh@520 -- # config=() 00:14:15.643 08:05:26 -- nvmf/common.sh@520 -- # local subsystem config 00:14:15.643 08:05:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:15.643 08:05:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:15.643 { 00:14:15.643 "params": { 00:14:15.643 "name": "Nvme$subsystem", 00:14:15.643 "trtype": "$TEST_TRANSPORT", 00:14:15.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:15.643 "adrfam": "ipv4", 00:14:15.643 "trsvcid": "$NVMF_PORT", 00:14:15.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:15.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:15.643 "hdgst": ${hdgst:-false}, 00:14:15.643 "ddgst": ${ddgst:-false} 00:14:15.643 }, 00:14:15.643 "method": "bdev_nvme_attach_controller" 00:14:15.643 } 00:14:15.643 EOF 00:14:15.643 )") 00:14:15.643 08:05:26 -- nvmf/common.sh@542 -- # cat 00:14:15.643 08:05:26 -- nvmf/common.sh@544 -- # jq . 00:14:15.643 08:05:26 -- nvmf/common.sh@545 -- # IFS=, 00:14:15.643 08:05:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:15.643 "params": { 00:14:15.643 "name": "Nvme0", 00:14:15.643 "trtype": "tcp", 00:14:15.644 "traddr": "10.0.0.2", 00:14:15.644 "adrfam": "ipv4", 00:14:15.644 "trsvcid": "4420", 00:14:15.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:15.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:15.644 "hdgst": false, 00:14:15.644 "ddgst": false 00:14:15.644 }, 00:14:15.644 "method": "bdev_nvme_attach_controller" 00:14:15.644 }' 00:14:15.644 [2024-12-07 08:05:26.768892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:15.644 [2024-12-07 08:05:26.768985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82967 ] 00:14:15.644 [2024-12-07 08:05:26.910198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.903 [2024-12-07 08:05:26.964465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.903 Running I/O for 1 seconds... 
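For reference, the --json /dev/fd/62 argument above receives the bdev_nvme_attach_controller config that gen_nvmf_target_json prints a few lines earlier. A roughly equivalent standalone invocation is sketched below, writing the config to a temporary file instead of a file descriptor; the outer "subsystems"/"bdev" wrapper is the usual SPDK JSON-config shape and is assumed here rather than shown verbatim in the log, and /tmp/bdevperf.json is an illustrative path.

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same bdevperf flags as the test: 64-deep queue, 64 KiB verify I/O for 1 second.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1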
00:14:17.281 00:14:17.281 Latency(us) 00:14:17.281 [2024-12-07T08:05:28.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.281 [2024-12-07T08:05:28.557Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:17.281 Verification LBA range: start 0x0 length 0x400 00:14:17.281 Nvme0n1 : 1.00 3834.77 239.67 0.00 0.00 16422.82 677.70 22997.18 00:14:17.281 [2024-12-07T08:05:28.557Z] =================================================================================================================== 00:14:17.281 [2024-12-07T08:05:28.557Z] Total : 3834.77 239.67 0.00 0.00 16422.82 677.70 22997.18 00:14:17.281 08:05:28 -- target/host_management.sh@101 -- # stoptarget 00:14:17.281 08:05:28 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:17.281 08:05:28 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:17.281 08:05:28 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:17.281 08:05:28 -- target/host_management.sh@40 -- # nvmftestfini 00:14:17.281 08:05:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:17.281 08:05:28 -- nvmf/common.sh@116 -- # sync 00:14:17.281 08:05:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:17.281 08:05:28 -- nvmf/common.sh@119 -- # set +e 00:14:17.281 08:05:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:17.281 08:05:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:17.281 rmmod nvme_tcp 00:14:17.281 rmmod nvme_fabrics 00:14:17.281 rmmod nvme_keyring 00:14:17.281 08:05:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:17.281 08:05:28 -- nvmf/common.sh@123 -- # set -e 00:14:17.281 08:05:28 -- nvmf/common.sh@124 -- # return 0 00:14:17.281 08:05:28 -- nvmf/common.sh@477 -- # '[' -n 82835 ']' 00:14:17.281 08:05:28 -- nvmf/common.sh@478 -- # killprocess 82835 00:14:17.281 08:05:28 -- common/autotest_common.sh@936 -- # '[' -z 82835 ']' 00:14:17.281 08:05:28 -- common/autotest_common.sh@940 -- # kill -0 82835 00:14:17.281 08:05:28 -- common/autotest_common.sh@941 -- # uname 00:14:17.281 08:05:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:17.281 08:05:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82835 00:14:17.281 killing process with pid 82835 00:14:17.281 08:05:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:17.281 08:05:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:17.281 08:05:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82835' 00:14:17.281 08:05:28 -- common/autotest_common.sh@955 -- # kill 82835 00:14:17.281 08:05:28 -- common/autotest_common.sh@960 -- # wait 82835 00:14:17.540 [2024-12-07 08:05:28.667803] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:17.540 08:05:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:17.540 08:05:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:17.540 08:05:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:17.540 08:05:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.540 08:05:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:17.540 08:05:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.540 08:05:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.540 08:05:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.540 08:05:28 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:17.540 00:14:17.540 real 0m5.425s 00:14:17.540 user 0m22.900s 00:14:17.540 sys 0m1.329s 00:14:17.540 08:05:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:17.540 08:05:28 -- common/autotest_common.sh@10 -- # set +x 00:14:17.540 ************************************ 00:14:17.540 END TEST nvmf_host_management 00:14:17.540 ************************************ 00:14:17.540 08:05:28 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:17.540 ************************************ 00:14:17.540 END TEST nvmf_host_management 00:14:17.540 ************************************ 00:14:17.540 00:14:17.540 real 0m6.081s 00:14:17.540 user 0m23.107s 00:14:17.540 sys 0m1.591s 00:14:17.540 08:05:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:17.540 08:05:28 -- common/autotest_common.sh@10 -- # set +x 00:14:17.801 08:05:28 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:17.801 08:05:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:17.801 08:05:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.801 08:05:28 -- common/autotest_common.sh@10 -- # set +x 00:14:17.801 ************************************ 00:14:17.801 START TEST nvmf_lvol 00:14:17.801 ************************************ 00:14:17.801 08:05:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:17.801 * Looking for test storage... 00:14:17.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.801 08:05:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:17.801 08:05:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:17.801 08:05:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:17.801 08:05:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:17.801 08:05:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:17.801 08:05:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:17.801 08:05:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:17.801 08:05:29 -- scripts/common.sh@335 -- # IFS=.-: 00:14:17.801 08:05:29 -- scripts/common.sh@335 -- # read -ra ver1 00:14:17.801 08:05:29 -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.801 08:05:29 -- scripts/common.sh@336 -- # read -ra ver2 00:14:17.801 08:05:29 -- scripts/common.sh@337 -- # local 'op=<' 00:14:17.801 08:05:29 -- scripts/common.sh@339 -- # ver1_l=2 00:14:17.801 08:05:29 -- scripts/common.sh@340 -- # ver2_l=1 00:14:17.801 08:05:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:17.801 08:05:29 -- scripts/common.sh@343 -- # case "$op" in 00:14:17.801 08:05:29 -- scripts/common.sh@344 -- # : 1 00:14:17.801 08:05:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:17.801 08:05:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.801 08:05:29 -- scripts/common.sh@364 -- # decimal 1 00:14:17.801 08:05:29 -- scripts/common.sh@352 -- # local d=1 00:14:17.801 08:05:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.801 08:05:29 -- scripts/common.sh@354 -- # echo 1 00:14:17.801 08:05:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:17.801 08:05:29 -- scripts/common.sh@365 -- # decimal 2 00:14:17.801 08:05:29 -- scripts/common.sh@352 -- # local d=2 00:14:17.801 08:05:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.801 08:05:29 -- scripts/common.sh@354 -- # echo 2 00:14:17.801 08:05:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:17.801 08:05:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:17.801 08:05:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:17.801 08:05:29 -- scripts/common.sh@367 -- # return 0 00:14:17.801 08:05:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.801 08:05:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:17.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.801 --rc genhtml_branch_coverage=1 00:14:17.801 --rc genhtml_function_coverage=1 00:14:17.801 --rc genhtml_legend=1 00:14:17.801 --rc geninfo_all_blocks=1 00:14:17.801 --rc geninfo_unexecuted_blocks=1 00:14:17.801 00:14:17.801 ' 00:14:17.801 08:05:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:17.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.801 --rc genhtml_branch_coverage=1 00:14:17.801 --rc genhtml_function_coverage=1 00:14:17.801 --rc genhtml_legend=1 00:14:17.801 --rc geninfo_all_blocks=1 00:14:17.801 --rc geninfo_unexecuted_blocks=1 00:14:17.801 00:14:17.801 ' 00:14:17.801 08:05:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:17.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.801 --rc genhtml_branch_coverage=1 00:14:17.801 --rc genhtml_function_coverage=1 00:14:17.801 --rc genhtml_legend=1 00:14:17.801 --rc geninfo_all_blocks=1 00:14:17.801 --rc geninfo_unexecuted_blocks=1 00:14:17.801 00:14:17.801 ' 00:14:17.801 08:05:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:17.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.801 --rc genhtml_branch_coverage=1 00:14:17.801 --rc genhtml_function_coverage=1 00:14:17.801 --rc genhtml_legend=1 00:14:17.801 --rc geninfo_all_blocks=1 00:14:17.801 --rc geninfo_unexecuted_blocks=1 00:14:17.801 00:14:17.801 ' 00:14:17.801 08:05:29 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.801 08:05:29 -- nvmf/common.sh@7 -- # uname -s 00:14:17.801 08:05:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.801 08:05:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.801 08:05:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.801 08:05:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.801 08:05:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.801 08:05:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.801 08:05:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.801 08:05:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.801 08:05:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.801 08:05:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.801 08:05:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:14:17.801 
08:05:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:14:17.801 08:05:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.801 08:05:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.801 08:05:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.801 08:05:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.801 08:05:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.801 08:05:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.801 08:05:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.801 08:05:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.801 08:05:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.801 08:05:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.801 08:05:29 -- paths/export.sh@5 -- # export PATH 00:14:17.801 08:05:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.801 08:05:29 -- nvmf/common.sh@46 -- # : 0 00:14:17.801 08:05:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:17.801 08:05:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:17.801 08:05:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:17.801 08:05:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.801 08:05:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.801 08:05:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
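The NVME_HOSTNQN/NVME_HOSTID pair generated above is what the harness hands to initiator-side tools through the NVME_HOST array. A small sketch of how nvme-cli would consume those values; the target address and subsystem NQN are the ones used elsewhere in this run, and the hostid derivation via parameter expansion is illustrative (it just takes the uuid suffix of the NQN, matching the values in this log):

NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:eb673a70-...
NVME_HOSTID=${NVME_HOSTNQN##*:}       # eb673a70-3a3d-4301-872c-26c9ce6fa6ec in this run
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"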
00:14:17.801 08:05:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:17.801 08:05:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:17.801 08:05:29 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.801 08:05:29 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.801 08:05:29 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:17.801 08:05:29 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:17.801 08:05:29 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:17.801 08:05:29 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:17.802 08:05:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:17.802 08:05:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.802 08:05:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:17.802 08:05:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:17.802 08:05:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:17.802 08:05:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.802 08:05:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.802 08:05:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.802 08:05:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:17.802 08:05:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:17.802 08:05:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:17.802 08:05:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:17.802 08:05:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:17.802 08:05:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:17.802 08:05:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.802 08:05:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.802 08:05:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:17.802 08:05:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:17.802 08:05:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.802 08:05:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.802 08:05:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.802 08:05:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.802 08:05:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.802 08:05:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.802 08:05:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.802 08:05:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.802 08:05:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:17.802 08:05:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:18.066 Cannot find device "nvmf_tgt_br" 00:14:18.066 08:05:29 -- nvmf/common.sh@154 -- # true 00:14:18.066 08:05:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.066 Cannot find device "nvmf_tgt_br2" 00:14:18.066 08:05:29 -- nvmf/common.sh@155 -- # true 00:14:18.066 08:05:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:18.066 08:05:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:18.066 Cannot find device "nvmf_tgt_br" 00:14:18.066 08:05:29 -- nvmf/common.sh@157 -- # true 00:14:18.066 08:05:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:18.066 Cannot find device "nvmf_tgt_br2" 00:14:18.066 08:05:29 -- nvmf/common.sh@158 -- # true 00:14:18.066 08:05:29 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:14:18.066 08:05:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:18.066 08:05:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.066 08:05:29 -- nvmf/common.sh@161 -- # true 00:14:18.066 08:05:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.066 08:05:29 -- nvmf/common.sh@162 -- # true 00:14:18.066 08:05:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.066 08:05:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.066 08:05:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.066 08:05:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.066 08:05:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.066 08:05:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.066 08:05:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.066 08:05:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:18.066 08:05:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:18.066 08:05:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:18.066 08:05:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:18.066 08:05:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:18.066 08:05:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:18.066 08:05:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.066 08:05:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.066 08:05:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.066 08:05:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:18.066 08:05:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:18.066 08:05:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.066 08:05:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.066 08:05:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.066 08:05:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.066 08:05:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.066 08:05:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:18.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:14:18.066 00:14:18.066 --- 10.0.0.2 ping statistics --- 00:14:18.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.067 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:18.067 08:05:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:18.067 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:18.067 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:14:18.067 00:14:18.067 --- 10.0.0.3 ping statistics --- 00:14:18.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.067 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:18.067 08:05:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:18.067 00:14:18.067 --- 10.0.0.1 ping statistics --- 00:14:18.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.067 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:18.067 08:05:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.067 08:05:29 -- nvmf/common.sh@421 -- # return 0 00:14:18.067 08:05:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:18.067 08:05:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.067 08:05:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:18.067 08:05:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:18.067 08:05:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.067 08:05:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:18.067 08:05:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:18.326 08:05:29 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:18.326 08:05:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:18.326 08:05:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.326 08:05:29 -- common/autotest_common.sh@10 -- # set +x 00:14:18.326 08:05:29 -- nvmf/common.sh@469 -- # nvmfpid=83203 00:14:18.326 08:05:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:18.326 08:05:29 -- nvmf/common.sh@470 -- # waitforlisten 83203 00:14:18.326 08:05:29 -- common/autotest_common.sh@829 -- # '[' -z 83203 ']' 00:14:18.326 08:05:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.326 08:05:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.326 08:05:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.326 08:05:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.326 08:05:29 -- common/autotest_common.sh@10 -- # set +x 00:14:18.326 [2024-12-07 08:05:29.406525] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:18.326 [2024-12-07 08:05:29.406607] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.326 [2024-12-07 08:05:29.534948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.585 [2024-12-07 08:05:29.601543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:18.585 [2024-12-07 08:05:29.601709] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.585 [2024-12-07 08:05:29.601720] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
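Before nvmf_tgt comes up (the "Starting SPDK v24.01.1-pre" lines above), nvmf_veth_init has already built the test topology: a dedicated network namespace for the target, veth pairs joined by a bridge, an iptables accept rule for port 4420, and ping checks in both directions. A condensed sketch of that sequence using the names and addresses from this run; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and omitted here for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7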
00:14:18.585 [2024-12-07 08:05:29.601728] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.585 [2024-12-07 08:05:29.601855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.585 [2024-12-07 08:05:29.602326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.585 [2024-12-07 08:05:29.602339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.153 08:05:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.153 08:05:30 -- common/autotest_common.sh@862 -- # return 0 00:14:19.153 08:05:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:19.153 08:05:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.153 08:05:30 -- common/autotest_common.sh@10 -- # set +x 00:14:19.153 08:05:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.153 08:05:30 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:19.411 [2024-12-07 08:05:30.640314] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.411 08:05:30 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:19.669 08:05:30 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:19.669 08:05:30 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:20.236 08:05:31 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:20.236 08:05:31 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:20.236 08:05:31 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:20.494 08:05:31 -- target/nvmf_lvol.sh@29 -- # lvs=85b2df0c-851e-4390-9f95-12dfd87d45a8 00:14:20.494 08:05:31 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 85b2df0c-851e-4390-9f95-12dfd87d45a8 lvol 20 00:14:20.752 08:05:31 -- target/nvmf_lvol.sh@32 -- # lvol=5ef117f6-f2f4-4dd9-be7e-609445962dae 00:14:20.752 08:05:31 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.014 08:05:32 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ef117f6-f2f4-4dd9-be7e-609445962dae 00:14:21.292 08:05:32 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:21.564 [2024-12-07 08:05:32.627844] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.564 08:05:32 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.823 08:05:32 -- target/nvmf_lvol.sh@42 -- # perf_pid=83351 00:14:21.823 08:05:32 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:21.823 08:05:32 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:22.758 08:05:33 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 5ef117f6-f2f4-4dd9-be7e-609445962dae MY_SNAPSHOT 
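The lvol under test is provisioned entirely through rpc.py against that target: two 64 MB malloc bdevs (512-byte blocks) striped into raid0, an lvstore named lvs on top of the raid, an lvol created with size argument 20 (LVOL_BDEV_INIT_SIZE), and a TCP subsystem exporting it on 10.0.0.2:4420. A condensed replay of those calls; $rpc is shorthand for the script path used throughout this log, and the UUID comments are the values this particular run produced:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                  # Malloc0
$rpc bdev_malloc_create 64 512                                  # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # 85b2df0c-851e-4390-9f95-12dfd87d45a8
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # 5ef117f6-f2f4-4dd9-be7e-609445962dae
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420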
00:14:23.015 08:05:34 -- target/nvmf_lvol.sh@47 -- # snapshot=24a2cca7-1ed6-4163-9625-816e48cc1255 00:14:23.015 08:05:34 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 5ef117f6-f2f4-4dd9-be7e-609445962dae 30 00:14:23.581 08:05:34 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 24a2cca7-1ed6-4163-9625-816e48cc1255 MY_CLONE 00:14:23.839 08:05:34 -- target/nvmf_lvol.sh@49 -- # clone=51d951f3-547e-4b2a-b29f-33348205303d 00:14:23.839 08:05:34 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 51d951f3-547e-4b2a-b29f-33348205303d 00:14:24.404 08:05:35 -- target/nvmf_lvol.sh@53 -- # wait 83351 00:14:32.510 Initializing NVMe Controllers 00:14:32.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:32.510 Controller IO queue size 128, less than required. 00:14:32.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:32.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:32.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:32.510 Initialization complete. Launching workers. 00:14:32.510 ======================================================== 00:14:32.510 Latency(us) 00:14:32.510 Device Information : IOPS MiB/s Average min max 00:14:32.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10280.50 40.16 12456.45 1471.75 59300.26 00:14:32.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10253.20 40.05 12484.54 1472.47 61729.25 00:14:32.510 ======================================================== 00:14:32.510 Total : 20533.70 80.21 12470.47 1471.75 61729.25 00:14:32.510 00:14:32.510 08:05:43 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:32.510 08:05:43 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5ef117f6-f2f4-4dd9-be7e-609445962dae 00:14:32.511 08:05:43 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85b2df0c-851e-4390-9f95-12dfd87d45a8 00:14:32.769 08:05:43 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:32.769 08:05:43 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:32.769 08:05:43 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:32.769 08:05:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:32.769 08:05:43 -- nvmf/common.sh@116 -- # sync 00:14:32.769 08:05:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:32.769 08:05:43 -- nvmf/common.sh@119 -- # set +e 00:14:32.769 08:05:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:32.769 08:05:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:32.769 rmmod nvme_tcp 00:14:32.769 rmmod nvme_fabrics 00:14:32.769 rmmod nvme_keyring 00:14:32.769 08:05:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:33.028 08:05:44 -- nvmf/common.sh@123 -- # set -e 00:14:33.028 08:05:44 -- nvmf/common.sh@124 -- # return 0 00:14:33.028 08:05:44 -- nvmf/common.sh@477 -- # '[' -n 83203 ']' 00:14:33.028 08:05:44 -- nvmf/common.sh@478 -- # killprocess 83203 00:14:33.028 08:05:44 -- common/autotest_common.sh@936 -- # '[' -z 83203 ']' 00:14:33.028 08:05:44 -- common/autotest_common.sh@940 -- # kill -0 83203 00:14:33.028 08:05:44 -- common/autotest_common.sh@941 -- # uname 00:14:33.028 
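With the randwrite workload from spdk_nvme_perf still running, the test then walks the lvol lifecycle and tears everything down once the perf process exits: snapshot the lvol, grow it to LVOL_BDEV_FINAL_SIZE=30, clone the snapshot, inflate the clone, then delete the subsystem, lvol and lvstore. A sketch of that tail end of the flow, reusing the $rpc/$lvs/$lvol shorthand from the provisioning sketch above; the UUID comments are what this run reported:

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)      # 24a2cca7-1ed6-4163-9625-816e48cc1255
$rpc bdev_lvol_resize "$lvol" 30                         # grow the origin while the snapshot exists
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)           # 51d951f3-547e-4b2a-b29f-33348205303d
$rpc bdev_lvol_inflate "$clone"                          # allocate the clone's clusters, detaching it from the snapshot
wait $perf_pid                                           # pid 83351 in this run; lets the 10 s randwrite finish
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"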
08:05:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:33.028 08:05:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83203 00:14:33.028 08:05:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:33.028 08:05:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:33.028 killing process with pid 83203 00:14:33.028 08:05:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83203' 00:14:33.028 08:05:44 -- common/autotest_common.sh@955 -- # kill 83203 00:14:33.028 08:05:44 -- common/autotest_common.sh@960 -- # wait 83203 00:14:33.287 08:05:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:33.287 08:05:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:33.287 08:05:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:33.287 08:05:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.287 08:05:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:33.287 08:05:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.287 08:05:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.287 08:05:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.287 08:05:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:33.287 00:14:33.287 real 0m15.620s 00:14:33.287 user 1m5.444s 00:14:33.287 sys 0m3.586s 00:14:33.287 08:05:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:33.287 ************************************ 00:14:33.287 END TEST nvmf_lvol 00:14:33.287 ************************************ 00:14:33.287 08:05:44 -- common/autotest_common.sh@10 -- # set +x 00:14:33.287 08:05:44 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.287 08:05:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:33.287 08:05:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:33.287 08:05:44 -- common/autotest_common.sh@10 -- # set +x 00:14:33.287 ************************************ 00:14:33.287 START TEST nvmf_lvs_grow 00:14:33.287 ************************************ 00:14:33.287 08:05:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:33.547 * Looking for test storage... 
00:14:33.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:33.547 08:05:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:33.547 08:05:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:33.547 08:05:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:33.547 08:05:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:33.547 08:05:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:33.547 08:05:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:33.547 08:05:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:33.547 08:05:44 -- scripts/common.sh@335 -- # IFS=.-: 00:14:33.547 08:05:44 -- scripts/common.sh@335 -- # read -ra ver1 00:14:33.547 08:05:44 -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.547 08:05:44 -- scripts/common.sh@336 -- # read -ra ver2 00:14:33.547 08:05:44 -- scripts/common.sh@337 -- # local 'op=<' 00:14:33.547 08:05:44 -- scripts/common.sh@339 -- # ver1_l=2 00:14:33.547 08:05:44 -- scripts/common.sh@340 -- # ver2_l=1 00:14:33.547 08:05:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:33.547 08:05:44 -- scripts/common.sh@343 -- # case "$op" in 00:14:33.547 08:05:44 -- scripts/common.sh@344 -- # : 1 00:14:33.547 08:05:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:33.547 08:05:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:33.547 08:05:44 -- scripts/common.sh@364 -- # decimal 1 00:14:33.547 08:05:44 -- scripts/common.sh@352 -- # local d=1 00:14:33.547 08:05:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.547 08:05:44 -- scripts/common.sh@354 -- # echo 1 00:14:33.547 08:05:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:33.547 08:05:44 -- scripts/common.sh@365 -- # decimal 2 00:14:33.547 08:05:44 -- scripts/common.sh@352 -- # local d=2 00:14:33.547 08:05:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.547 08:05:44 -- scripts/common.sh@354 -- # echo 2 00:14:33.547 08:05:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:33.547 08:05:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:33.547 08:05:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:33.547 08:05:44 -- scripts/common.sh@367 -- # return 0 00:14:33.547 08:05:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.547 08:05:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:33.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.547 --rc genhtml_branch_coverage=1 00:14:33.547 --rc genhtml_function_coverage=1 00:14:33.547 --rc genhtml_legend=1 00:14:33.547 --rc geninfo_all_blocks=1 00:14:33.547 --rc geninfo_unexecuted_blocks=1 00:14:33.547 00:14:33.547 ' 00:14:33.547 08:05:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:33.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.547 --rc genhtml_branch_coverage=1 00:14:33.547 --rc genhtml_function_coverage=1 00:14:33.547 --rc genhtml_legend=1 00:14:33.547 --rc geninfo_all_blocks=1 00:14:33.547 --rc geninfo_unexecuted_blocks=1 00:14:33.547 00:14:33.547 ' 00:14:33.547 08:05:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:33.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.547 --rc genhtml_branch_coverage=1 00:14:33.547 --rc genhtml_function_coverage=1 00:14:33.547 --rc genhtml_legend=1 00:14:33.547 --rc geninfo_all_blocks=1 00:14:33.547 --rc geninfo_unexecuted_blocks=1 00:14:33.547 00:14:33.547 ' 00:14:33.547 
08:05:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:33.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.547 --rc genhtml_branch_coverage=1 00:14:33.547 --rc genhtml_function_coverage=1 00:14:33.547 --rc genhtml_legend=1 00:14:33.547 --rc geninfo_all_blocks=1 00:14:33.547 --rc geninfo_unexecuted_blocks=1 00:14:33.547 00:14:33.547 ' 00:14:33.547 08:05:44 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.547 08:05:44 -- nvmf/common.sh@7 -- # uname -s 00:14:33.547 08:05:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.547 08:05:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.547 08:05:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.547 08:05:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.547 08:05:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.547 08:05:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.547 08:05:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.547 08:05:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.547 08:05:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.547 08:05:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.547 08:05:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:14:33.547 08:05:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:14:33.547 08:05:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.547 08:05:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.547 08:05:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.547 08:05:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.547 08:05:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.547 08:05:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.547 08:05:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.547 08:05:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.547 08:05:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.547 08:05:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.547 08:05:44 -- paths/export.sh@5 -- # export PATH 00:14:33.547 08:05:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.547 08:05:44 -- nvmf/common.sh@46 -- # : 0 00:14:33.547 08:05:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:33.547 08:05:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:33.547 08:05:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:33.547 08:05:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.547 08:05:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.547 08:05:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:33.547 08:05:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:33.547 08:05:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:33.547 08:05:44 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:33.547 08:05:44 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.547 08:05:44 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:33.547 08:05:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:33.547 08:05:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.547 08:05:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:33.547 08:05:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:33.547 08:05:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:33.547 08:05:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.548 08:05:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.548 08:05:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.548 08:05:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:33.548 08:05:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:33.548 08:05:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:33.548 08:05:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:33.548 08:05:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:33.548 08:05:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:33.548 08:05:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.548 08:05:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.548 08:05:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:33.548 08:05:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:33.548 08:05:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:33.548 08:05:44 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:33.548 08:05:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:33.548 08:05:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.548 08:05:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:33.548 08:05:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:33.548 08:05:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:33.548 08:05:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:33.548 08:05:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:33.548 08:05:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:33.548 Cannot find device "nvmf_tgt_br" 00:14:33.548 08:05:44 -- nvmf/common.sh@154 -- # true 00:14:33.548 08:05:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.548 Cannot find device "nvmf_tgt_br2" 00:14:33.548 08:05:44 -- nvmf/common.sh@155 -- # true 00:14:33.548 08:05:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:33.548 08:05:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:33.548 Cannot find device "nvmf_tgt_br" 00:14:33.548 08:05:44 -- nvmf/common.sh@157 -- # true 00:14:33.548 08:05:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:33.548 Cannot find device "nvmf_tgt_br2" 00:14:33.548 08:05:44 -- nvmf/common.sh@158 -- # true 00:14:33.548 08:05:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:33.806 08:05:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:33.806 08:05:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.806 08:05:44 -- nvmf/common.sh@161 -- # true 00:14:33.806 08:05:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.807 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.807 08:05:44 -- nvmf/common.sh@162 -- # true 00:14:33.807 08:05:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:33.807 08:05:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:33.807 08:05:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:33.807 08:05:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:33.807 08:05:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:33.807 08:05:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:33.807 08:05:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:33.807 08:05:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:33.807 08:05:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:33.807 08:05:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:33.807 08:05:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:33.807 08:05:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:33.807 08:05:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:33.807 08:05:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:33.807 08:05:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:33.807 08:05:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:33.807 08:05:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:33.807 08:05:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:33.807 08:05:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:33.807 08:05:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:33.807 08:05:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:33.807 08:05:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:33.807 08:05:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:33.807 08:05:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:33.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:14:33.807 00:14:33.807 --- 10.0.0.2 ping statistics --- 00:14:33.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.807 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:33.807 08:05:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:33.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:33.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.029 ms 00:14:33.807 00:14:33.807 --- 10.0.0.3 ping statistics --- 00:14:33.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.807 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:33.807 08:05:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:33.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:14:33.807 00:14:33.807 --- 10.0.0.1 ping statistics --- 00:14:33.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.807 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:14:33.807 08:05:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.807 08:05:45 -- nvmf/common.sh@421 -- # return 0 00:14:33.807 08:05:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.807 08:05:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.807 08:05:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:33.807 08:05:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:33.807 08:05:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.807 08:05:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:33.807 08:05:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:33.807 08:05:45 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:33.807 08:05:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:33.807 08:05:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.807 08:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:33.807 08:05:45 -- nvmf/common.sh@469 -- # nvmfpid=83727 00:14:33.807 08:05:45 -- nvmf/common.sh@470 -- # waitforlisten 83727 00:14:33.807 08:05:45 -- common/autotest_common.sh@829 -- # '[' -z 83727 ']' 00:14:33.807 08:05:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:33.807 08:05:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.807 08:05:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
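The ping checks just above verify the virtual topology that nvmf_veth_init builds before the target starts: the initiator interface stays in the root namespace with 10.0.0.1, both target interfaces are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, and the veth peers are joined by a Linux bridge. A minimal manual sketch of the same setup, assuming the interface names and 10.0.0.0/24 addressing seen in this log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, follows the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target across the bridge, as checked above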
00:14:33.807 08:05:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.807 08:05:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.807 08:05:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.067 [2024-12-07 08:05:45.120261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:34.067 [2024-12-07 08:05:45.120338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.067 [2024-12-07 08:05:45.251561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.067 [2024-12-07 08:05:45.338210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:34.067 [2024-12-07 08:05:45.338407] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.067 [2024-12-07 08:05:45.338421] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.067 [2024-12-07 08:05:45.338430] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.067 [2024-12-07 08:05:45.338463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.001 08:05:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.001 08:05:46 -- common/autotest_common.sh@862 -- # return 0 00:14:35.001 08:05:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:35.001 08:05:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.001 08:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:35.001 08:05:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.001 08:05:46 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:35.259 [2024-12-07 08:05:46.366797] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:35.259 08:05:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:35.259 08:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:35.259 08:05:46 -- common/autotest_common.sh@10 -- # set +x 00:14:35.259 ************************************ 00:14:35.259 START TEST lvs_grow_clean 00:14:35.259 ************************************ 00:14:35.259 08:05:46 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:35.259 08:05:46 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.517 08:05:46 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:35.517 08:05:46 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:35.776 08:05:46 -- target/nvmf_lvs_grow.sh@28 -- # lvs=62d399e1-7087-442e-8974-7a2569497614 00:14:35.776 08:05:46 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:35.776 08:05:46 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:36.034 08:05:47 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:36.034 08:05:47 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:36.034 08:05:47 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 62d399e1-7087-442e-8974-7a2569497614 lvol 150 00:14:36.292 08:05:47 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d7456f20-d6b9-4e9d-ae45-1ee165bec8ee 00:14:36.292 08:05:47 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:36.292 08:05:47 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:36.550 [2024-12-07 08:05:47.638941] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:36.550 [2024-12-07 08:05:47.639013] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:36.550 true 00:14:36.550 08:05:47 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:36.551 08:05:47 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:36.809 08:05:47 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:36.809 08:05:47 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:37.068 08:05:48 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d7456f20-d6b9-4e9d-ae45-1ee165bec8ee 00:14:37.068 08:05:48 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:37.326 [2024-12-07 08:05:48.567565] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.326 08:05:48 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.585 08:05:48 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:37.585 08:05:48 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83883 00:14:37.585 08:05:48 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.585 08:05:48 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83883 /var/tmp/bdevperf.sock 00:14:37.585 08:05:48 -- common/autotest_common.sh@829 -- # '[' -z 83883 ']' 00:14:37.585 
08:05:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.585 08:05:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.585 08:05:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.585 08:05:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.585 08:05:48 -- common/autotest_common.sh@10 -- # set +x 00:14:37.844 [2024-12-07 08:05:48.882501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:37.844 [2024-12-07 08:05:48.882593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83883 ] 00:14:37.844 [2024-12-07 08:05:49.019981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.844 [2024-12-07 08:05:49.087046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.781 08:05:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.781 08:05:49 -- common/autotest_common.sh@862 -- # return 0 00:14:38.781 08:05:49 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:39.039 Nvme0n1 00:14:39.039 08:05:50 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:39.298 [ 00:14:39.298 { 00:14:39.298 "aliases": [ 00:14:39.298 "d7456f20-d6b9-4e9d-ae45-1ee165bec8ee" 00:14:39.298 ], 00:14:39.298 "assigned_rate_limits": { 00:14:39.298 "r_mbytes_per_sec": 0, 00:14:39.298 "rw_ios_per_sec": 0, 00:14:39.298 "rw_mbytes_per_sec": 0, 00:14:39.298 "w_mbytes_per_sec": 0 00:14:39.298 }, 00:14:39.298 "block_size": 4096, 00:14:39.298 "claimed": false, 00:14:39.298 "driver_specific": { 00:14:39.298 "mp_policy": "active_passive", 00:14:39.298 "nvme": [ 00:14:39.298 { 00:14:39.298 "ctrlr_data": { 00:14:39.298 "ana_reporting": false, 00:14:39.298 "cntlid": 1, 00:14:39.298 "firmware_revision": "24.01.1", 00:14:39.298 "model_number": "SPDK bdev Controller", 00:14:39.298 "multi_ctrlr": true, 00:14:39.298 "oacs": { 00:14:39.298 "firmware": 0, 00:14:39.298 "format": 0, 00:14:39.298 "ns_manage": 0, 00:14:39.298 "security": 0 00:14:39.298 }, 00:14:39.298 "serial_number": "SPDK0", 00:14:39.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.298 "vendor_id": "0x8086" 00:14:39.298 }, 00:14:39.298 "ns_data": { 00:14:39.298 "can_share": true, 00:14:39.298 "id": 1 00:14:39.298 }, 00:14:39.298 "trid": { 00:14:39.298 "adrfam": "IPv4", 00:14:39.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:39.298 "traddr": "10.0.0.2", 00:14:39.298 "trsvcid": "4420", 00:14:39.298 "trtype": "TCP" 00:14:39.298 }, 00:14:39.298 "vs": { 00:14:39.298 "nvme_version": "1.3" 00:14:39.298 } 00:14:39.298 } 00:14:39.298 ] 00:14:39.298 }, 00:14:39.298 "name": "Nvme0n1", 00:14:39.298 "num_blocks": 38912, 00:14:39.298 "product_name": "NVMe disk", 00:14:39.298 "supported_io_types": { 00:14:39.298 "abort": true, 00:14:39.299 "compare": true, 00:14:39.299 "compare_and_write": true, 00:14:39.299 "flush": true, 00:14:39.299 "nvme_admin": true, 00:14:39.299 "nvme_io": true, 00:14:39.299 "read": true, 
00:14:39.299 "reset": true, 00:14:39.299 "unmap": true, 00:14:39.299 "write": true, 00:14:39.299 "write_zeroes": true 00:14:39.299 }, 00:14:39.299 "uuid": "d7456f20-d6b9-4e9d-ae45-1ee165bec8ee", 00:14:39.299 "zoned": false 00:14:39.299 } 00:14:39.299 ] 00:14:39.299 08:05:50 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83937 00:14:39.299 08:05:50 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:39.299 08:05:50 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:39.299 Running I/O for 10 seconds... 00:14:40.676 Latency(us) 00:14:40.676 [2024-12-07T08:05:51.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.676 [2024-12-07T08:05:51.952Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.676 Nvme0n1 : 1.00 8908.00 34.80 0.00 0.00 0.00 0.00 0.00 00:14:40.676 [2024-12-07T08:05:51.952Z] =================================================================================================================== 00:14:40.676 [2024-12-07T08:05:51.952Z] Total : 8908.00 34.80 0.00 0.00 0.00 0.00 0.00 00:14:40.676 00:14:41.243 08:05:52 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 62d399e1-7087-442e-8974-7a2569497614 00:14:41.501 [2024-12-07T08:05:52.777Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.501 Nvme0n1 : 2.00 8661.50 33.83 0.00 0.00 0.00 0.00 0.00 00:14:41.501 [2024-12-07T08:05:52.778Z] =================================================================================================================== 00:14:41.502 [2024-12-07T08:05:52.778Z] Total : 8661.50 33.83 0.00 0.00 0.00 0.00 0.00 00:14:41.502 00:14:41.502 true 00:14:41.760 08:05:52 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:41.760 08:05:52 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:42.018 08:05:53 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:42.018 08:05:53 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:42.018 08:05:53 -- target/nvmf_lvs_grow.sh@65 -- # wait 83937 00:14:42.584 [2024-12-07T08:05:53.860Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.584 Nvme0n1 : 3.00 9070.00 35.43 0.00 0.00 0.00 0.00 0.00 00:14:42.584 [2024-12-07T08:05:53.860Z] =================================================================================================================== 00:14:42.584 [2024-12-07T08:05:53.860Z] Total : 9070.00 35.43 0.00 0.00 0.00 0.00 0.00 00:14:42.584 00:14:43.517 [2024-12-07T08:05:54.793Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.517 Nvme0n1 : 4.00 9157.25 35.77 0.00 0.00 0.00 0.00 0.00 00:14:43.517 [2024-12-07T08:05:54.793Z] =================================================================================================================== 00:14:43.517 [2024-12-07T08:05:54.793Z] Total : 9157.25 35.77 0.00 0.00 0.00 0.00 0.00 00:14:43.517 00:14:44.450 [2024-12-07T08:05:55.726Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.450 Nvme0n1 : 5.00 9084.20 35.49 0.00 0.00 0.00 0.00 0.00 00:14:44.450 [2024-12-07T08:05:55.726Z] =================================================================================================================== 00:14:44.450 [2024-12-07T08:05:55.726Z] Total : 9084.20 
35.49 0.00 0.00 0.00 0.00 0.00 00:14:44.450 00:14:45.407 [2024-12-07T08:05:56.683Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.407 Nvme0n1 : 6.00 9152.33 35.75 0.00 0.00 0.00 0.00 0.00 00:14:45.407 [2024-12-07T08:05:56.683Z] =================================================================================================================== 00:14:45.407 [2024-12-07T08:05:56.683Z] Total : 9152.33 35.75 0.00 0.00 0.00 0.00 0.00 00:14:45.407 00:14:46.355 [2024-12-07T08:05:57.631Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.355 Nvme0n1 : 7.00 8938.14 34.91 0.00 0.00 0.00 0.00 0.00 00:14:46.355 [2024-12-07T08:05:57.631Z] =================================================================================================================== 00:14:46.355 [2024-12-07T08:05:57.631Z] Total : 8938.14 34.91 0.00 0.00 0.00 0.00 0.00 00:14:46.355 00:14:47.289 [2024-12-07T08:05:58.565Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.289 Nvme0n1 : 8.00 8908.00 34.80 0.00 0.00 0.00 0.00 0.00 00:14:47.289 [2024-12-07T08:05:58.565Z] =================================================================================================================== 00:14:47.289 [2024-12-07T08:05:58.565Z] Total : 8908.00 34.80 0.00 0.00 0.00 0.00 0.00 00:14:47.290 00:14:48.666 [2024-12-07T08:05:59.942Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.666 Nvme0n1 : 9.00 8853.22 34.58 0.00 0.00 0.00 0.00 0.00 00:14:48.666 [2024-12-07T08:05:59.942Z] =================================================================================================================== 00:14:48.666 [2024-12-07T08:05:59.942Z] Total : 8853.22 34.58 0.00 0.00 0.00 0.00 0.00 00:14:48.666 00:14:49.602 [2024-12-07T08:06:00.878Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.602 Nvme0n1 : 10.00 8804.00 34.39 0.00 0.00 0.00 0.00 0.00 00:14:49.602 [2024-12-07T08:06:00.878Z] =================================================================================================================== 00:14:49.602 [2024-12-07T08:06:00.878Z] Total : 8804.00 34.39 0.00 0.00 0.00 0.00 0.00 00:14:49.602 00:14:49.602 00:14:49.602 Latency(us) 00:14:49.602 [2024-12-07T08:06:00.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.602 [2024-12-07T08:06:00.878Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:49.602 Nvme0n1 : 10.01 8806.60 34.40 0.00 0.00 14525.24 6404.65 175398.17 00:14:49.602 [2024-12-07T08:06:00.878Z] =================================================================================================================== 00:14:49.602 [2024-12-07T08:06:00.878Z] Total : 8806.60 34.40 0.00 0.00 14525.24 6404.65 175398.17 00:14:49.602 0 00:14:49.602 08:06:00 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83883 00:14:49.602 08:06:00 -- common/autotest_common.sh@936 -- # '[' -z 83883 ']' 00:14:49.602 08:06:00 -- common/autotest_common.sh@940 -- # kill -0 83883 00:14:49.602 08:06:00 -- common/autotest_common.sh@941 -- # uname 00:14:49.602 08:06:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:49.602 08:06:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83883 00:14:49.602 08:06:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:49.603 08:06:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:49.603 killing process with pid 83883 00:14:49.603 
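The (( data_clusters == 99 )) check above is the core assertion of lvs_grow_clean: the lvol store was created on a 200 MiB aio file with --cluster-sz 4194304 (4 MiB), which the log reports as 49 usable data clusters (the remainder goes to lvstore metadata), and after the backing file is truncated to 400 MiB, rescanned, and grown while bdevperf keeps writing, the same query returns 99. A rough sketch of just that grow path against the running target, assuming the file path and lvstore UUID from this run and rpc.py on PATH:

    truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev                              # pick up the larger backing file
    rpc.py bdev_lvol_grow_lvstore -u 62d399e1-7087-442e-8974-7a2569497614
    rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 \
        | jq -r '.[0].total_data_clusters'                       # 49 before the grow, 99 after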
08:06:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83883' 00:14:49.603 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.603 00:14:49.603 Latency(us) 00:14:49.603 [2024-12-07T08:06:00.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.603 [2024-12-07T08:06:00.879Z] =================================================================================================================== 00:14:49.603 [2024-12-07T08:06:00.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:49.603 08:06:00 -- common/autotest_common.sh@955 -- # kill 83883 00:14:49.603 08:06:00 -- common/autotest_common.sh@960 -- # wait 83883 00:14:49.603 08:06:00 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:49.862 08:06:01 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:49.862 08:06:01 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:50.121 08:06:01 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:50.121 08:06:01 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:50.121 08:06:01 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:50.379 [2024-12-07 08:06:01.581106] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:50.379 08:06:01 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:50.379 08:06:01 -- common/autotest_common.sh@650 -- # local es=0 00:14:50.379 08:06:01 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:50.379 08:06:01 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.379 08:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.379 08:06:01 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.379 08:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.379 08:06:01 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.379 08:06:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.379 08:06:01 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:50.379 08:06:01 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:50.379 08:06:01 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:50.637 2024/12/07 08:06:01 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:62d399e1-7087-442e-8974-7a2569497614], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:50.637 request: 00:14:50.637 { 00:14:50.637 "method": "bdev_lvol_get_lvstores", 00:14:50.637 "params": { 00:14:50.637 "uuid": "62d399e1-7087-442e-8974-7a2569497614" 00:14:50.637 } 00:14:50.637 } 00:14:50.637 Got JSON-RPC error response 00:14:50.637 GoRPCClient: error on JSON-RPC call 00:14:50.637 08:06:01 -- common/autotest_common.sh@653 -- # es=1 00:14:50.637 08:06:01 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.637 08:06:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:50.637 08:06:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.637 08:06:01 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.895 aio_bdev 00:14:50.895 08:06:02 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d7456f20-d6b9-4e9d-ae45-1ee165bec8ee 00:14:50.895 08:06:02 -- common/autotest_common.sh@897 -- # local bdev_name=d7456f20-d6b9-4e9d-ae45-1ee165bec8ee 00:14:50.895 08:06:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:50.895 08:06:02 -- common/autotest_common.sh@899 -- # local i 00:14:50.895 08:06:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:50.895 08:06:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:50.895 08:06:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:51.153 08:06:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d7456f20-d6b9-4e9d-ae45-1ee165bec8ee -t 2000 00:14:51.412 [ 00:14:51.412 { 00:14:51.412 "aliases": [ 00:14:51.412 "lvs/lvol" 00:14:51.412 ], 00:14:51.412 "assigned_rate_limits": { 00:14:51.412 "r_mbytes_per_sec": 0, 00:14:51.412 "rw_ios_per_sec": 0, 00:14:51.412 "rw_mbytes_per_sec": 0, 00:14:51.412 "w_mbytes_per_sec": 0 00:14:51.412 }, 00:14:51.412 "block_size": 4096, 00:14:51.412 "claimed": false, 00:14:51.412 "driver_specific": { 00:14:51.412 "lvol": { 00:14:51.412 "base_bdev": "aio_bdev", 00:14:51.412 "clone": false, 00:14:51.412 "esnap_clone": false, 00:14:51.412 "lvol_store_uuid": "62d399e1-7087-442e-8974-7a2569497614", 00:14:51.412 "snapshot": false, 00:14:51.412 "thin_provision": false 00:14:51.412 } 00:14:51.412 }, 00:14:51.412 "name": "d7456f20-d6b9-4e9d-ae45-1ee165bec8ee", 00:14:51.412 "num_blocks": 38912, 00:14:51.412 "product_name": "Logical Volume", 00:14:51.412 "supported_io_types": { 00:14:51.412 "abort": false, 00:14:51.412 "compare": false, 00:14:51.412 "compare_and_write": false, 00:14:51.412 "flush": false, 00:14:51.412 "nvme_admin": false, 00:14:51.412 "nvme_io": false, 00:14:51.412 "read": true, 00:14:51.412 "reset": true, 00:14:51.412 "unmap": true, 00:14:51.412 "write": true, 00:14:51.412 "write_zeroes": true 00:14:51.412 }, 00:14:51.412 "uuid": "d7456f20-d6b9-4e9d-ae45-1ee165bec8ee", 00:14:51.412 "zoned": false 00:14:51.412 } 00:14:51.412 ] 00:14:51.412 08:06:02 -- common/autotest_common.sh@905 -- # return 0 00:14:51.412 08:06:02 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:51.412 08:06:02 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:51.671 08:06:02 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:51.671 08:06:02 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d399e1-7087-442e-8974-7a2569497614 00:14:51.671 08:06:02 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:51.929 08:06:03 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:51.929 08:06:03 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d7456f20-d6b9-4e9d-ae45-1ee165bec8ee 00:14:52.187 08:06:03 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 62d399e1-7087-442e-8974-7a2569497614 00:14:52.445 08:06:03 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:52.703 08:06:03 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:52.961 00:14:52.961 real 0m17.758s 00:14:52.961 user 0m17.136s 00:14:52.961 sys 0m2.186s 00:14:52.961 08:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.961 08:06:04 -- common/autotest_common.sh@10 -- # set +x 00:14:52.961 ************************************ 00:14:52.961 END TEST lvs_grow_clean 00:14:52.961 ************************************ 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:52.961 08:06:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.961 08:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.961 08:06:04 -- common/autotest_common.sh@10 -- # set +x 00:14:52.961 ************************************ 00:14:52.961 START TEST lvs_grow_dirty 00:14:52.961 ************************************ 00:14:52.961 08:06:04 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:52.961 08:06:04 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:53.219 08:06:04 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:53.220 08:06:04 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:53.478 08:06:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:53.478 08:06:04 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:53.737 08:06:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:14:53.737 08:06:04 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:14:53.737 08:06:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:53.737 08:06:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:53.737 08:06:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:53.737 08:06:04 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a lvol 150 00:14:53.996 08:06:05 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b1f46f49-56d8-48ec-8b5d-b7074569c12b 00:14:53.996 08:06:05 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:53.996 08:06:05 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:54.255 [2024-12-07 08:06:05.441234] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO 
device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:54.255 [2024-12-07 08:06:05.441299] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:54.255 true 00:14:54.255 08:06:05 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:14:54.255 08:06:05 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:54.512 08:06:05 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:54.512 08:06:05 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:54.770 08:06:06 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b1f46f49-56d8-48ec-8b5d-b7074569c12b 00:14:55.027 08:06:06 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:55.285 08:06:06 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.544 08:06:06 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=84325 00:14:55.544 08:06:06 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:55.544 08:06:06 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.544 08:06:06 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 84325 /var/tmp/bdevperf.sock 00:14:55.544 08:06:06 -- common/autotest_common.sh@829 -- # '[' -z 84325 ']' 00:14:55.544 08:06:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.544 08:06:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:55.544 08:06:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.544 08:06:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:55.544 08:06:06 -- common/autotest_common.sh@10 -- # set +x 00:14:55.544 [2024-12-07 08:06:06.668427] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:55.544 [2024-12-07 08:06:06.668685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84325 ] 00:14:55.544 [2024-12-07 08:06:06.804429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.804 [2024-12-07 08:06:06.872600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.740 08:06:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.740 08:06:07 -- common/autotest_common.sh@862 -- # return 0 00:14:56.740 08:06:07 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:56.740 Nvme0n1 00:14:56.740 08:06:07 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:56.999 [ 00:14:56.999 { 00:14:56.999 "aliases": [ 00:14:56.999 "b1f46f49-56d8-48ec-8b5d-b7074569c12b" 00:14:56.999 ], 00:14:56.999 "assigned_rate_limits": { 00:14:56.999 "r_mbytes_per_sec": 0, 00:14:56.999 "rw_ios_per_sec": 0, 00:14:56.999 "rw_mbytes_per_sec": 0, 00:14:56.999 "w_mbytes_per_sec": 0 00:14:56.999 }, 00:14:56.999 "block_size": 4096, 00:14:56.999 "claimed": false, 00:14:56.999 "driver_specific": { 00:14:56.999 "mp_policy": "active_passive", 00:14:56.999 "nvme": [ 00:14:56.999 { 00:14:56.999 "ctrlr_data": { 00:14:56.999 "ana_reporting": false, 00:14:56.999 "cntlid": 1, 00:14:56.999 "firmware_revision": "24.01.1", 00:14:56.999 "model_number": "SPDK bdev Controller", 00:14:56.999 "multi_ctrlr": true, 00:14:56.999 "oacs": { 00:14:56.999 "firmware": 0, 00:14:56.999 "format": 0, 00:14:56.999 "ns_manage": 0, 00:14:56.999 "security": 0 00:14:56.999 }, 00:14:56.999 "serial_number": "SPDK0", 00:14:56.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:56.999 "vendor_id": "0x8086" 00:14:56.999 }, 00:14:56.999 "ns_data": { 00:14:56.999 "can_share": true, 00:14:56.999 "id": 1 00:14:56.999 }, 00:14:56.999 "trid": { 00:14:56.999 "adrfam": "IPv4", 00:14:56.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:56.999 "traddr": "10.0.0.2", 00:14:56.999 "trsvcid": "4420", 00:14:56.999 "trtype": "TCP" 00:14:56.999 }, 00:14:56.999 "vs": { 00:14:56.999 "nvme_version": "1.3" 00:14:56.999 } 00:14:56.999 } 00:14:56.999 ] 00:14:56.999 }, 00:14:56.999 "name": "Nvme0n1", 00:14:56.999 "num_blocks": 38912, 00:14:56.999 "product_name": "NVMe disk", 00:14:56.999 "supported_io_types": { 00:14:56.999 "abort": true, 00:14:56.999 "compare": true, 00:14:56.999 "compare_and_write": true, 00:14:56.999 "flush": true, 00:14:56.999 "nvme_admin": true, 00:14:56.999 "nvme_io": true, 00:14:56.999 "read": true, 00:14:56.999 "reset": true, 00:14:56.999 "unmap": true, 00:14:56.999 "write": true, 00:14:56.999 "write_zeroes": true 00:14:56.999 }, 00:14:56.999 "uuid": "b1f46f49-56d8-48ec-8b5d-b7074569c12b", 00:14:56.999 "zoned": false 00:14:56.999 } 00:14:56.999 ] 00:14:56.999 08:06:08 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=84371 00:14:57.000 08:06:08 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:57.000 08:06:08 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:57.258 Running I/O for 10 seconds... 
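The per-interval tables that follow come from bdevperf driving the exported lvol over NVMe/TCP: 4 KiB random writes at queue depth 128 for 10 seconds on core mask 0x2 (-o 4096 -q 128 -w randwrite -t 10 -S 1), with the lvol store grown underneath it mid-run. A condensed sketch of that measurement loop, assuming the RPC socket path and target address used in this log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests                  # starts the run; bdevperf prints the tables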
00:14:58.195 Latency(us) 00:14:58.195 [2024-12-07T08:06:09.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.195 [2024-12-07T08:06:09.471Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.195 Nvme0n1 : 1.00 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:14:58.195 [2024-12-07T08:06:09.471Z] =================================================================================================================== 00:14:58.195 [2024-12-07T08:06:09.471Z] Total : 7212.00 28.17 0.00 0.00 0.00 0.00 0.00 00:14:58.195 00:14:59.131 08:06:10 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:14:59.131 [2024-12-07T08:06:10.407Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.131 Nvme0n1 : 2.00 7256.00 28.34 0.00 0.00 0.00 0.00 0.00 00:14:59.132 [2024-12-07T08:06:10.408Z] =================================================================================================================== 00:14:59.132 [2024-12-07T08:06:10.408Z] Total : 7256.00 28.34 0.00 0.00 0.00 0.00 0.00 00:14:59.132 00:14:59.390 true 00:14:59.390 08:06:10 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:14:59.390 08:06:10 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:59.648 08:06:10 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:59.648 08:06:10 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:59.648 08:06:10 -- target/nvmf_lvs_grow.sh@65 -- # wait 84371 00:15:00.215 [2024-12-07T08:06:11.491Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.215 Nvme0n1 : 3.00 7267.00 28.39 0.00 0.00 0.00 0.00 0.00 00:15:00.215 [2024-12-07T08:06:11.491Z] =================================================================================================================== 00:15:00.215 [2024-12-07T08:06:11.491Z] Total : 7267.00 28.39 0.00 0.00 0.00 0.00 0.00 00:15:00.215 00:15:01.152 [2024-12-07T08:06:12.428Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.152 Nvme0n1 : 4.00 7332.25 28.64 0.00 0.00 0.00 0.00 0.00 00:15:01.152 [2024-12-07T08:06:12.428Z] =================================================================================================================== 00:15:01.152 [2024-12-07T08:06:12.428Z] Total : 7332.25 28.64 0.00 0.00 0.00 0.00 0.00 00:15:01.152 00:15:02.108 [2024-12-07T08:06:13.384Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.108 Nvme0n1 : 5.00 7206.00 28.15 0.00 0.00 0.00 0.00 0.00 00:15:02.108 [2024-12-07T08:06:13.384Z] =================================================================================================================== 00:15:02.108 [2024-12-07T08:06:13.384Z] Total : 7206.00 28.15 0.00 0.00 0.00 0.00 0.00 00:15:02.108 00:15:03.482 [2024-12-07T08:06:14.758Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.482 Nvme0n1 : 6.00 7221.33 28.21 0.00 0.00 0.00 0.00 0.00 00:15:03.482 [2024-12-07T08:06:14.758Z] =================================================================================================================== 00:15:03.482 [2024-12-07T08:06:14.758Z] Total : 7221.33 28.21 0.00 0.00 0.00 0.00 0.00 00:15:03.482 00:15:04.416 [2024-12-07T08:06:15.692Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:04.416 Nvme0n1 : 7.00 7197.86 28.12 0.00 0.00 0.00 0.00 0.00 00:15:04.416 [2024-12-07T08:06:15.692Z] =================================================================================================================== 00:15:04.416 [2024-12-07T08:06:15.692Z] Total : 7197.86 28.12 0.00 0.00 0.00 0.00 0.00 00:15:04.416 00:15:05.349 [2024-12-07T08:06:16.625Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.349 Nvme0n1 : 8.00 7182.75 28.06 0.00 0.00 0.00 0.00 0.00 00:15:05.349 [2024-12-07T08:06:16.625Z] =================================================================================================================== 00:15:05.349 [2024-12-07T08:06:16.625Z] Total : 7182.75 28.06 0.00 0.00 0.00 0.00 0.00 00:15:05.349 00:15:06.285 [2024-12-07T08:06:17.561Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.285 Nvme0n1 : 9.00 7158.11 27.96 0.00 0.00 0.00 0.00 0.00 00:15:06.285 [2024-12-07T08:06:17.561Z] =================================================================================================================== 00:15:06.285 [2024-12-07T08:06:17.561Z] Total : 7158.11 27.96 0.00 0.00 0.00 0.00 0.00 00:15:06.285 00:15:07.221 [2024-12-07T08:06:18.497Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.221 Nvme0n1 : 10.00 7135.20 27.87 0.00 0.00 0.00 0.00 0.00 00:15:07.221 [2024-12-07T08:06:18.497Z] =================================================================================================================== 00:15:07.221 [2024-12-07T08:06:18.497Z] Total : 7135.20 27.87 0.00 0.00 0.00 0.00 0.00 00:15:07.221 00:15:07.221 00:15:07.221 Latency(us) 00:15:07.221 [2024-12-07T08:06:18.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.222 [2024-12-07T08:06:18.498Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.222 Nvme0n1 : 10.00 7144.51 27.91 0.00 0.00 17911.24 5630.14 112483.61 00:15:07.222 [2024-12-07T08:06:18.498Z] =================================================================================================================== 00:15:07.222 [2024-12-07T08:06:18.498Z] Total : 7144.51 27.91 0.00 0.00 17911.24 5630.14 112483.61 00:15:07.222 0 00:15:07.222 08:06:18 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 84325 00:15:07.222 08:06:18 -- common/autotest_common.sh@936 -- # '[' -z 84325 ']' 00:15:07.222 08:06:18 -- common/autotest_common.sh@940 -- # kill -0 84325 00:15:07.222 08:06:18 -- common/autotest_common.sh@941 -- # uname 00:15:07.222 08:06:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:07.222 08:06:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84325 00:15:07.222 killing process with pid 84325 00:15:07.222 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.222 00:15:07.222 Latency(us) 00:15:07.222 [2024-12-07T08:06:18.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.222 [2024-12-07T08:06:18.498Z] =================================================================================================================== 00:15:07.222 [2024-12-07T08:06:18.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.222 08:06:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:07.222 08:06:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:07.222 08:06:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84325' 00:15:07.222 08:06:18 -- common/autotest_common.sh@955 
-- # kill 84325 00:15:07.222 08:06:18 -- common/autotest_common.sh@960 -- # wait 84325 00:15:07.480 08:06:18 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:07.739 08:06:18 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:07.739 08:06:18 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:07.998 08:06:19 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:07.998 08:06:19 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:07.998 08:06:19 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83727 00:15:07.998 08:06:19 -- target/nvmf_lvs_grow.sh@74 -- # wait 83727 00:15:07.998 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83727 Killed "${NVMF_APP[@]}" "$@" 00:15:07.998 08:06:19 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:07.998 08:06:19 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:07.998 08:06:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:07.998 08:06:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.998 08:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:07.998 08:06:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:07.998 08:06:19 -- nvmf/common.sh@469 -- # nvmfpid=84523 00:15:07.998 08:06:19 -- nvmf/common.sh@470 -- # waitforlisten 84523 00:15:07.998 08:06:19 -- common/autotest_common.sh@829 -- # '[' -z 84523 ']' 00:15:07.998 08:06:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.998 08:06:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.998 08:06:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.998 08:06:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.998 08:06:19 -- common/autotest_common.sh@10 -- # set +x 00:15:07.998 [2024-12-07 08:06:19.172279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:07.998 [2024-12-07 08:06:19.172368] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.257 [2024-12-07 08:06:19.309427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.257 [2024-12-07 08:06:19.383457] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.257 [2024-12-07 08:06:19.383629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.257 [2024-12-07 08:06:19.383641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.257 [2024-12-07 08:06:19.383649] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
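This is where lvs_grow_dirty departs from the clean variant: the first nvmf_tgt (pid 83727) is killed with SIGKILL while the lvol store is still open, so its metadata is never cleanly closed, and a fresh target (pid 84523) is started against the same backing file. When the aio bdev is re-created below, the blobstore detects the unclean shutdown and replays its metadata ("Performing recovery on blobstore"), after which the grown geometry must still be visible: 99 total data clusters and 61 free, the same values as before the kill. A minimal sketch of that kill-and-recover step, assuming the paths, UUID, and pid variable from this run:

    kill -9 "$nvmfpid"                                           # leave the lvstore dirty on purpose
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a \
        | jq -r '.[0].free_clusters'                             # expect 61, unchanged by the crash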
00:15:08.257 [2024-12-07 08:06:19.383679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.195 08:06:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.195 08:06:20 -- common/autotest_common.sh@862 -- # return 0 00:15:09.195 08:06:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:09.195 08:06:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:09.195 08:06:20 -- common/autotest_common.sh@10 -- # set +x 00:15:09.195 08:06:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.195 08:06:20 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:09.195 [2024-12-07 08:06:20.448256] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:09.195 [2024-12-07 08:06:20.448539] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:09.195 [2024-12-07 08:06:20.448721] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:09.454 08:06:20 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:09.454 08:06:20 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev b1f46f49-56d8-48ec-8b5d-b7074569c12b 00:15:09.454 08:06:20 -- common/autotest_common.sh@897 -- # local bdev_name=b1f46f49-56d8-48ec-8b5d-b7074569c12b 00:15:09.454 08:06:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.454 08:06:20 -- common/autotest_common.sh@899 -- # local i 00:15:09.454 08:06:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.454 08:06:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.454 08:06:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:09.712 08:06:20 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f46f49-56d8-48ec-8b5d-b7074569c12b -t 2000 00:15:09.970 [ 00:15:09.970 { 00:15:09.970 "aliases": [ 00:15:09.970 "lvs/lvol" 00:15:09.970 ], 00:15:09.970 "assigned_rate_limits": { 00:15:09.970 "r_mbytes_per_sec": 0, 00:15:09.970 "rw_ios_per_sec": 0, 00:15:09.970 "rw_mbytes_per_sec": 0, 00:15:09.970 "w_mbytes_per_sec": 0 00:15:09.970 }, 00:15:09.970 "block_size": 4096, 00:15:09.970 "claimed": false, 00:15:09.970 "driver_specific": { 00:15:09.970 "lvol": { 00:15:09.970 "base_bdev": "aio_bdev", 00:15:09.970 "clone": false, 00:15:09.970 "esnap_clone": false, 00:15:09.970 "lvol_store_uuid": "9eddc3aa-b882-4670-bb4c-dea43ee50a9a", 00:15:09.970 "snapshot": false, 00:15:09.970 "thin_provision": false 00:15:09.970 } 00:15:09.970 }, 00:15:09.970 "name": "b1f46f49-56d8-48ec-8b5d-b7074569c12b", 00:15:09.970 "num_blocks": 38912, 00:15:09.970 "product_name": "Logical Volume", 00:15:09.970 "supported_io_types": { 00:15:09.970 "abort": false, 00:15:09.971 "compare": false, 00:15:09.971 "compare_and_write": false, 00:15:09.971 "flush": false, 00:15:09.971 "nvme_admin": false, 00:15:09.971 "nvme_io": false, 00:15:09.971 "read": true, 00:15:09.971 "reset": true, 00:15:09.971 "unmap": true, 00:15:09.971 "write": true, 00:15:09.971 "write_zeroes": true 00:15:09.971 }, 00:15:09.971 "uuid": "b1f46f49-56d8-48ec-8b5d-b7074569c12b", 00:15:09.971 "zoned": false 00:15:09.971 } 00:15:09.971 ] 00:15:09.971 08:06:21 -- common/autotest_common.sh@905 -- # return 0 00:15:09.971 08:06:21 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:09.971 08:06:21 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:10.229 08:06:21 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:10.229 08:06:21 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:10.229 08:06:21 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:10.488 08:06:21 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:10.488 08:06:21 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:10.488 [2024-12-07 08:06:21.709811] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:10.488 08:06:21 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:10.488 08:06:21 -- common/autotest_common.sh@650 -- # local es=0 00:15:10.488 08:06:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:10.488 08:06:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.488 08:06:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.488 08:06:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.488 08:06:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.488 08:06:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.488 08:06:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.488 08:06:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.488 08:06:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:10.488 08:06:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:10.747 2024/12/07 08:06:22 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:9eddc3aa-b882-4670-bb4c-dea43ee50a9a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:11.006 request: 00:15:11.006 { 00:15:11.006 "method": "bdev_lvol_get_lvstores", 00:15:11.006 "params": { 00:15:11.006 "uuid": "9eddc3aa-b882-4670-bb4c-dea43ee50a9a" 00:15:11.006 } 00:15:11.006 } 00:15:11.006 Got JSON-RPC error response 00:15:11.006 GoRPCClient: error on JSON-RPC call 00:15:11.006 08:06:22 -- common/autotest_common.sh@653 -- # es=1 00:15:11.006 08:06:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.006 08:06:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.006 08:06:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.006 08:06:22 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:11.264 aio_bdev 00:15:11.264 08:06:22 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b1f46f49-56d8-48ec-8b5d-b7074569c12b 00:15:11.264 08:06:22 -- common/autotest_common.sh@897 -- # local bdev_name=b1f46f49-56d8-48ec-8b5d-b7074569c12b 00:15:11.264 08:06:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:11.264 
08:06:22 -- common/autotest_common.sh@899 -- # local i 00:15:11.264 08:06:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:11.264 08:06:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:11.265 08:06:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.523 08:06:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f46f49-56d8-48ec-8b5d-b7074569c12b -t 2000 00:15:11.523 [ 00:15:11.523 { 00:15:11.523 "aliases": [ 00:15:11.523 "lvs/lvol" 00:15:11.523 ], 00:15:11.523 "assigned_rate_limits": { 00:15:11.523 "r_mbytes_per_sec": 0, 00:15:11.523 "rw_ios_per_sec": 0, 00:15:11.523 "rw_mbytes_per_sec": 0, 00:15:11.523 "w_mbytes_per_sec": 0 00:15:11.523 }, 00:15:11.523 "block_size": 4096, 00:15:11.523 "claimed": false, 00:15:11.523 "driver_specific": { 00:15:11.523 "lvol": { 00:15:11.523 "base_bdev": "aio_bdev", 00:15:11.523 "clone": false, 00:15:11.523 "esnap_clone": false, 00:15:11.523 "lvol_store_uuid": "9eddc3aa-b882-4670-bb4c-dea43ee50a9a", 00:15:11.523 "snapshot": false, 00:15:11.523 "thin_provision": false 00:15:11.523 } 00:15:11.523 }, 00:15:11.523 "name": "b1f46f49-56d8-48ec-8b5d-b7074569c12b", 00:15:11.523 "num_blocks": 38912, 00:15:11.523 "product_name": "Logical Volume", 00:15:11.523 "supported_io_types": { 00:15:11.523 "abort": false, 00:15:11.523 "compare": false, 00:15:11.523 "compare_and_write": false, 00:15:11.523 "flush": false, 00:15:11.523 "nvme_admin": false, 00:15:11.523 "nvme_io": false, 00:15:11.523 "read": true, 00:15:11.523 "reset": true, 00:15:11.523 "unmap": true, 00:15:11.523 "write": true, 00:15:11.523 "write_zeroes": true 00:15:11.523 }, 00:15:11.523 "uuid": "b1f46f49-56d8-48ec-8b5d-b7074569c12b", 00:15:11.523 "zoned": false 00:15:11.523 } 00:15:11.523 ] 00:15:11.523 08:06:22 -- common/autotest_common.sh@905 -- # return 0 00:15:11.523 08:06:22 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:11.523 08:06:22 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:12.091 08:06:23 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:12.091 08:06:23 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:12.091 08:06:23 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:12.091 08:06:23 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:12.091 08:06:23 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b1f46f49-56d8-48ec-8b5d-b7074569c12b 00:15:12.658 08:06:23 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9eddc3aa-b882-4670-bb4c-dea43ee50a9a 00:15:12.658 08:06:23 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.917 08:06:24 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.483 ************************************ 00:15:13.483 END TEST lvs_grow_dirty 00:15:13.483 ************************************ 00:15:13.483 00:15:13.483 real 0m20.322s 00:15:13.483 user 0m40.649s 00:15:13.483 sys 0m8.653s 00:15:13.483 08:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.483 08:06:24 -- common/autotest_common.sh@10 -- # set +x 00:15:13.483 08:06:24 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:13.483 08:06:24 -- common/autotest_common.sh@806 -- # type=--id 00:15:13.483 08:06:24 -- common/autotest_common.sh@807 -- # id=0 00:15:13.483 08:06:24 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:13.483 08:06:24 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:13.483 08:06:24 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:13.483 08:06:24 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:13.483 08:06:24 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:13.483 08:06:24 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:13.483 nvmf_trace.0 00:15:13.483 08:06:24 -- common/autotest_common.sh@821 -- # return 0 00:15:13.483 08:06:24 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:13.483 08:06:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:13.483 08:06:24 -- nvmf/common.sh@116 -- # sync 00:15:14.050 08:06:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:14.050 08:06:25 -- nvmf/common.sh@119 -- # set +e 00:15:14.050 08:06:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:14.050 08:06:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:14.050 rmmod nvme_tcp 00:15:14.050 rmmod nvme_fabrics 00:15:14.050 rmmod nvme_keyring 00:15:14.308 08:06:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:14.308 08:06:25 -- nvmf/common.sh@123 -- # set -e 00:15:14.308 08:06:25 -- nvmf/common.sh@124 -- # return 0 00:15:14.308 08:06:25 -- nvmf/common.sh@477 -- # '[' -n 84523 ']' 00:15:14.308 08:06:25 -- nvmf/common.sh@478 -- # killprocess 84523 00:15:14.308 08:06:25 -- common/autotest_common.sh@936 -- # '[' -z 84523 ']' 00:15:14.308 08:06:25 -- common/autotest_common.sh@940 -- # kill -0 84523 00:15:14.308 08:06:25 -- common/autotest_common.sh@941 -- # uname 00:15:14.308 08:06:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.308 08:06:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84523 00:15:14.308 08:06:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.308 08:06:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:14.308 killing process with pid 84523 00:15:14.308 08:06:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84523' 00:15:14.308 08:06:25 -- common/autotest_common.sh@955 -- # kill 84523 00:15:14.308 08:06:25 -- common/autotest_common.sh@960 -- # wait 84523 00:15:14.308 08:06:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:14.308 08:06:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:14.308 08:06:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:14.308 08:06:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.308 08:06:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:14.308 08:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.308 08:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.308 08:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.566 08:06:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:14.566 ************************************ 00:15:14.566 END TEST nvmf_lvs_grow 00:15:14.566 ************************************ 00:15:14.566 00:15:14.566 real 0m41.081s 00:15:14.566 user 1m4.809s 00:15:14.566 sys 0m12.045s 00:15:14.566 08:06:25 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:15:14.566 08:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:14.567 08:06:25 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:14.567 08:06:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:14.567 08:06:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.567 08:06:25 -- common/autotest_common.sh@10 -- # set +x 00:15:14.567 ************************************ 00:15:14.567 START TEST nvmf_bdev_io_wait 00:15:14.567 ************************************ 00:15:14.567 08:06:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:14.567 * Looking for test storage... 00:15:14.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:14.567 08:06:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:14.567 08:06:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:14.567 08:06:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:14.567 08:06:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:14.567 08:06:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:14.567 08:06:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:14.567 08:06:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:14.567 08:06:25 -- scripts/common.sh@335 -- # IFS=.-: 00:15:14.567 08:06:25 -- scripts/common.sh@335 -- # read -ra ver1 00:15:14.567 08:06:25 -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.567 08:06:25 -- scripts/common.sh@336 -- # read -ra ver2 00:15:14.567 08:06:25 -- scripts/common.sh@337 -- # local 'op=<' 00:15:14.567 08:06:25 -- scripts/common.sh@339 -- # ver1_l=2 00:15:14.567 08:06:25 -- scripts/common.sh@340 -- # ver2_l=1 00:15:14.567 08:06:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:14.567 08:06:25 -- scripts/common.sh@343 -- # case "$op" in 00:15:14.567 08:06:25 -- scripts/common.sh@344 -- # : 1 00:15:14.567 08:06:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:14.567 08:06:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.567 08:06:25 -- scripts/common.sh@364 -- # decimal 1 00:15:14.567 08:06:25 -- scripts/common.sh@352 -- # local d=1 00:15:14.567 08:06:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.567 08:06:25 -- scripts/common.sh@354 -- # echo 1 00:15:14.567 08:06:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:14.567 08:06:25 -- scripts/common.sh@365 -- # decimal 2 00:15:14.567 08:06:25 -- scripts/common.sh@352 -- # local d=2 00:15:14.567 08:06:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.567 08:06:25 -- scripts/common.sh@354 -- # echo 2 00:15:14.567 08:06:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:14.567 08:06:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:14.567 08:06:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:14.567 08:06:25 -- scripts/common.sh@367 -- # return 0 00:15:14.567 08:06:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.567 08:06:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.567 --rc genhtml_branch_coverage=1 00:15:14.567 --rc genhtml_function_coverage=1 00:15:14.567 --rc genhtml_legend=1 00:15:14.567 --rc geninfo_all_blocks=1 00:15:14.567 --rc geninfo_unexecuted_blocks=1 00:15:14.567 00:15:14.567 ' 00:15:14.567 08:06:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.567 --rc genhtml_branch_coverage=1 00:15:14.567 --rc genhtml_function_coverage=1 00:15:14.567 --rc genhtml_legend=1 00:15:14.567 --rc geninfo_all_blocks=1 00:15:14.567 --rc geninfo_unexecuted_blocks=1 00:15:14.567 00:15:14.567 ' 00:15:14.567 08:06:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.567 --rc genhtml_branch_coverage=1 00:15:14.567 --rc genhtml_function_coverage=1 00:15:14.567 --rc genhtml_legend=1 00:15:14.567 --rc geninfo_all_blocks=1 00:15:14.567 --rc geninfo_unexecuted_blocks=1 00:15:14.567 00:15:14.567 ' 00:15:14.567 08:06:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:14.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.567 --rc genhtml_branch_coverage=1 00:15:14.567 --rc genhtml_function_coverage=1 00:15:14.567 --rc genhtml_legend=1 00:15:14.567 --rc geninfo_all_blocks=1 00:15:14.567 --rc geninfo_unexecuted_blocks=1 00:15:14.567 00:15:14.567 ' 00:15:14.567 08:06:25 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:14.567 08:06:25 -- nvmf/common.sh@7 -- # uname -s 00:15:14.567 08:06:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.567 08:06:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.567 08:06:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.567 08:06:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.567 08:06:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.567 08:06:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.567 08:06:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.567 08:06:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.567 08:06:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.567 08:06:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.825 08:06:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
00:15:14.825 08:06:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:15:14.825 08:06:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.825 08:06:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.825 08:06:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:14.825 08:06:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:14.825 08:06:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.825 08:06:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.825 08:06:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.825 08:06:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.825 08:06:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.825 08:06:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.825 08:06:25 -- paths/export.sh@5 -- # export PATH 00:15:14.825 08:06:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.825 08:06:25 -- nvmf/common.sh@46 -- # : 0 00:15:14.825 08:06:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:14.825 08:06:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:14.825 08:06:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:14.825 08:06:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.825 08:06:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.825 08:06:25 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:14.825 08:06:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:14.825 08:06:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:14.825 08:06:25 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.825 08:06:25 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.825 08:06:25 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:14.825 08:06:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:14.825 08:06:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.825 08:06:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:14.825 08:06:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:14.825 08:06:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:14.825 08:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.825 08:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.825 08:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.825 08:06:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:14.825 08:06:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:14.825 08:06:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:14.825 08:06:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:14.825 08:06:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:14.825 08:06:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:14.825 08:06:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.825 08:06:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.825 08:06:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:14.825 08:06:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:14.825 08:06:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:14.825 08:06:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:14.825 08:06:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:14.825 08:06:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.825 08:06:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:14.825 08:06:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:14.825 08:06:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:14.825 08:06:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:14.825 08:06:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:14.825 08:06:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:14.825 Cannot find device "nvmf_tgt_br" 00:15:14.825 08:06:25 -- nvmf/common.sh@154 -- # true 00:15:14.825 08:06:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:14.825 Cannot find device "nvmf_tgt_br2" 00:15:14.825 08:06:25 -- nvmf/common.sh@155 -- # true 00:15:14.825 08:06:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:14.825 08:06:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:14.825 Cannot find device "nvmf_tgt_br" 00:15:14.825 08:06:25 -- nvmf/common.sh@157 -- # true 00:15:14.825 08:06:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:14.825 Cannot find device "nvmf_tgt_br2" 00:15:14.825 08:06:25 -- nvmf/common.sh@158 -- # true 00:15:14.825 08:06:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:14.825 08:06:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:14.825 08:06:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:14.825 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.825 08:06:25 -- nvmf/common.sh@161 -- # true 00:15:14.825 08:06:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:14.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:14.825 08:06:26 -- nvmf/common.sh@162 -- # true 00:15:14.825 08:06:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:14.825 08:06:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:14.825 08:06:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:14.825 08:06:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:14.825 08:06:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:14.825 08:06:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:14.825 08:06:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:14.825 08:06:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:14.826 08:06:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:14.826 08:06:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:14.826 08:06:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:15.084 08:06:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:15.084 08:06:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:15.084 08:06:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:15.084 08:06:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:15.084 08:06:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:15.084 08:06:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:15.084 08:06:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:15.084 08:06:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:15.084 08:06:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:15.084 08:06:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:15.084 08:06:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:15.084 08:06:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:15.084 08:06:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:15.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:15.084 00:15:15.084 --- 10.0.0.2 ping statistics --- 00:15:15.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.084 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:15.084 08:06:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:15.084 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:15.084 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:15:15.084 00:15:15.084 --- 10.0.0.3 ping statistics --- 00:15:15.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.084 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:15.084 08:06:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:15.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:15.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:15.084 00:15:15.084 --- 10.0.0.1 ping statistics --- 00:15:15.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.084 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:15.084 08:06:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.084 08:06:26 -- nvmf/common.sh@421 -- # return 0 00:15:15.084 08:06:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:15.084 08:06:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.084 08:06:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:15.084 08:06:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:15.084 08:06:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.084 08:06:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:15.084 08:06:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:15.084 08:06:26 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:15.084 08:06:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:15.084 08:06:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:15.084 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.084 08:06:26 -- nvmf/common.sh@469 -- # nvmfpid=84955 00:15:15.084 08:06:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:15.084 08:06:26 -- nvmf/common.sh@470 -- # waitforlisten 84955 00:15:15.084 08:06:26 -- common/autotest_common.sh@829 -- # '[' -z 84955 ']' 00:15:15.084 08:06:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.084 08:06:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.084 08:06:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.084 08:06:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.084 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.084 [2024-12-07 08:06:26.282090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.084 [2024-12-07 08:06:26.282184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.343 [2024-12-07 08:06:26.424114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.343 [2024-12-07 08:06:26.492795] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:15.343 [2024-12-07 08:06:26.492937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.343 [2024-12-07 08:06:26.492950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.343 [2024-12-07 08:06:26.492958] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.343 [2024-12-07 08:06:26.493022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.343 [2024-12-07 08:06:26.493373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.343 [2024-12-07 08:06:26.494045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.343 [2024-12-07 08:06:26.494092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.343 08:06:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:15.343 08:06:26 -- common/autotest_common.sh@862 -- # return 0 00:15:15.343 08:06:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:15.343 08:06:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:15.343 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.343 08:06:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.343 08:06:26 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:15.343 08:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.343 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.343 08:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.343 08:06:26 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:15.343 08:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.343 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.657 08:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.657 08:06:26 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.657 08:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.657 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.657 [2024-12-07 08:06:26.672032] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.657 08:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.657 08:06:26 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.657 08:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.657 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.657 Malloc0 00:15:15.657 08:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.657 08:06:26 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.657 08:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.657 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.657 08:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.657 08:06:26 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.657 08:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.657 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.657 08:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.657 08:06:26 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.657 08:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.657 08:06:26 -- common/autotest_common.sh@10 -- # set +x 00:15:15.657 [2024-12-07 08:06:26.736240] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.657 08:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=85000 00:15:15.658 08:06:26 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@30 -- # READ_PID=85002 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # config=() 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # local subsystem config 00:15:15.658 08:06:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:15.658 { 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme$subsystem", 00:15:15.658 "trtype": "$TEST_TRANSPORT", 00:15:15.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "$NVMF_PORT", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.658 "hdgst": ${hdgst:-false}, 00:15:15.658 "ddgst": ${ddgst:-false} 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 } 00:15:15.658 EOF 00:15:15.658 )") 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # config=() 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=85004 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # local subsystem config 00:15:15.658 08:06:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:15.658 { 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme$subsystem", 00:15:15.658 "trtype": "$TEST_TRANSPORT", 00:15:15.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "$NVMF_PORT", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.658 "hdgst": ${hdgst:-false}, 00:15:15.658 "ddgst": ${ddgst:-false} 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 } 00:15:15.658 EOF 00:15:15.658 )") 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # cat 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=85006 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@35 -- # sync 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # cat 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # config=() 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # local subsystem config 00:15:15.658 08:06:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:15.658 { 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme$subsystem", 00:15:15.658 "trtype": "$TEST_TRANSPORT", 00:15:15.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "$NVMF_PORT", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.658 "hdgst": ${hdgst:-false}, 00:15:15.658 "ddgst": ${ddgst:-false} 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 } 00:15:15.658 EOF 00:15:15.658 )") 00:15:15.658 08:06:26 -- nvmf/common.sh@544 -- # jq . 00:15:15.658 08:06:26 -- nvmf/common.sh@544 -- # jq . 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # cat 00:15:15.658 08:06:26 -- nvmf/common.sh@545 -- # IFS=, 00:15:15.658 08:06:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme1", 00:15:15.658 "trtype": "tcp", 00:15:15.658 "traddr": "10.0.0.2", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "4420", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.658 "hdgst": false, 00:15:15.658 "ddgst": false 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 }' 00:15:15.658 08:06:26 -- nvmf/common.sh@544 -- # jq . 00:15:15.658 08:06:26 -- nvmf/common.sh@545 -- # IFS=, 00:15:15.658 08:06:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme1", 00:15:15.658 "trtype": "tcp", 00:15:15.658 "traddr": "10.0.0.2", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "4420", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.658 "hdgst": false, 00:15:15.658 "ddgst": false 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 }' 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # config=() 00:15:15.658 08:06:26 -- nvmf/common.sh@520 -- # local subsystem config 00:15:15.658 08:06:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:15.658 { 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme$subsystem", 00:15:15.658 "trtype": "$TEST_TRANSPORT", 00:15:15.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "$NVMF_PORT", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.658 "hdgst": ${hdgst:-false}, 00:15:15.658 "ddgst": ${ddgst:-false} 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 } 00:15:15.658 EOF 00:15:15.658 )") 00:15:15.658 08:06:26 -- nvmf/common.sh@545 -- # IFS=, 00:15:15.658 08:06:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme1", 00:15:15.658 "trtype": "tcp", 00:15:15.658 "traddr": "10.0.0.2", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "4420", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.658 "hdgst": false, 00:15:15.658 "ddgst": false 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 }' 00:15:15.658 08:06:26 -- nvmf/common.sh@542 -- # cat 00:15:15.658 08:06:26 -- nvmf/common.sh@544 -- # jq . 
00:15:15.658 08:06:26 -- nvmf/common.sh@545 -- # IFS=, 00:15:15.658 08:06:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:15.658 "params": { 00:15:15.658 "name": "Nvme1", 00:15:15.658 "trtype": "tcp", 00:15:15.658 "traddr": "10.0.0.2", 00:15:15.658 "adrfam": "ipv4", 00:15:15.658 "trsvcid": "4420", 00:15:15.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.658 "hdgst": false, 00:15:15.658 "ddgst": false 00:15:15.658 }, 00:15:15.658 "method": "bdev_nvme_attach_controller" 00:15:15.658 }' 00:15:15.658 [2024-12-07 08:06:26.798158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.658 [2024-12-07 08:06:26.798261] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:15.658 [2024-12-07 08:06:26.811114] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.658 [2024-12-07 08:06:26.811209] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:15.658 08:06:26 -- target/bdev_io_wait.sh@37 -- # wait 85000 00:15:15.658 [2024-12-07 08:06:26.825308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.658 [2024-12-07 08:06:26.825401] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:15.658 [2024-12-07 08:06:26.826295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:15.658 [2024-12-07 08:06:26.826377] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:15.935 [2024-12-07 08:06:27.005707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.935 [2024-12-07 08:06:27.074364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:15.935 [2024-12-07 08:06:27.082243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.935 [2024-12-07 08:06:27.151993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:15.935 [2024-12-07 08:06:27.162178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.194 [2024-12-07 08:06:27.229342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:16.194 [2024-12-07 08:06:27.240859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.194 Running I/O for 1 seconds... 00:15:16.194 Running I/O for 1 seconds... 00:15:16.194 [2024-12-07 08:06:27.308968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:16.194 Running I/O for 1 seconds... 00:15:16.194 Running I/O for 1 seconds... 
00:15:17.130 00:15:17.130 Latency(us) 00:15:17.130 [2024-12-07T08:06:28.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.130 [2024-12-07T08:06:28.406Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:17.130 Nvme1n1 : 1.02 5133.02 20.05 0.00 0.00 24636.67 10962.39 36700.16 00:15:17.130 [2024-12-07T08:06:28.406Z] =================================================================================================================== 00:15:17.130 [2024-12-07T08:06:28.406Z] Total : 5133.02 20.05 0.00 0.00 24636.67 10962.39 36700.16 00:15:17.130 00:15:17.130 Latency(us) 00:15:17.130 [2024-12-07T08:06:28.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.130 [2024-12-07T08:06:28.406Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:17.130 Nvme1n1 : 1.01 11503.96 44.94 0.00 0.00 11083.01 6821.70 20733.21 00:15:17.130 [2024-12-07T08:06:28.406Z] =================================================================================================================== 00:15:17.130 [2024-12-07T08:06:28.406Z] Total : 11503.96 44.94 0.00 0.00 11083.01 6821.70 20733.21 00:15:17.130 00:15:17.130 Latency(us) 00:15:17.130 [2024-12-07T08:06:28.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.130 [2024-12-07T08:06:28.406Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:17.130 Nvme1n1 : 1.01 5326.77 20.81 0.00 0.00 23952.69 6017.40 51237.24 00:15:17.130 [2024-12-07T08:06:28.406Z] =================================================================================================================== 00:15:17.130 [2024-12-07T08:06:28.406Z] Total : 5326.77 20.81 0.00 0.00 23952.69 6017.40 51237.24 00:15:17.389 00:15:17.389 Latency(us) 00:15:17.389 [2024-12-07T08:06:28.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.389 [2024-12-07T08:06:28.665Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:17.390 Nvme1n1 : 1.00 199469.98 779.18 0.00 0.00 638.92 269.96 1102.20 00:15:17.390 [2024-12-07T08:06:28.666Z] =================================================================================================================== 00:15:17.390 [2024-12-07T08:06:28.666Z] Total : 199469.98 779.18 0.00 0.00 638.92 269.96 1102.20 00:15:17.390 08:06:28 -- target/bdev_io_wait.sh@38 -- # wait 85002 00:15:17.390 08:06:28 -- target/bdev_io_wait.sh@39 -- # wait 85004 00:15:17.648 08:06:28 -- target/bdev_io_wait.sh@40 -- # wait 85006 00:15:17.648 08:06:28 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.649 08:06:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.649 08:06:28 -- common/autotest_common.sh@10 -- # set +x 00:15:17.649 08:06:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.649 08:06:28 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:17.649 08:06:28 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:17.649 08:06:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:17.649 08:06:28 -- nvmf/common.sh@116 -- # sync 00:15:17.649 08:06:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:17.649 08:06:28 -- nvmf/common.sh@119 -- # set +e 00:15:17.649 08:06:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:17.649 08:06:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:17.649 rmmod nvme_tcp 00:15:17.649 rmmod nvme_fabrics 00:15:17.649 rmmod nvme_keyring 00:15:17.649 08:06:28 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:17.649 08:06:28 -- nvmf/common.sh@123 -- # set -e 00:15:17.649 08:06:28 -- nvmf/common.sh@124 -- # return 0 00:15:17.649 08:06:28 -- nvmf/common.sh@477 -- # '[' -n 84955 ']' 00:15:17.649 08:06:28 -- nvmf/common.sh@478 -- # killprocess 84955 00:15:17.649 08:06:28 -- common/autotest_common.sh@936 -- # '[' -z 84955 ']' 00:15:17.649 08:06:28 -- common/autotest_common.sh@940 -- # kill -0 84955 00:15:17.649 08:06:28 -- common/autotest_common.sh@941 -- # uname 00:15:17.649 08:06:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:17.649 08:06:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84955 00:15:17.908 08:06:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:17.908 08:06:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:17.908 killing process with pid 84955 00:15:17.908 08:06:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84955' 00:15:17.908 08:06:28 -- common/autotest_common.sh@955 -- # kill 84955 00:15:17.908 08:06:28 -- common/autotest_common.sh@960 -- # wait 84955 00:15:17.908 08:06:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:17.908 08:06:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:17.908 08:06:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:17.908 08:06:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.908 08:06:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:17.908 08:06:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.908 08:06:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.908 08:06:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.908 08:06:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:17.908 00:15:17.908 real 0m3.516s 00:15:17.908 user 0m15.422s 00:15:17.908 sys 0m1.941s 00:15:17.908 08:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:17.908 ************************************ 00:15:17.908 END TEST nvmf_bdev_io_wait 00:15:17.908 ************************************ 00:15:17.908 08:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:18.169 08:06:29 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:18.169 08:06:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:18.169 08:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:18.169 08:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:18.169 ************************************ 00:15:18.169 START TEST nvmf_queue_depth 00:15:18.169 ************************************ 00:15:18.169 08:06:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:18.169 * Looking for test storage... 
00:15:18.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:18.169 08:06:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:18.169 08:06:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:18.169 08:06:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:18.169 08:06:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:18.169 08:06:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:18.169 08:06:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:18.169 08:06:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:18.169 08:06:29 -- scripts/common.sh@335 -- # IFS=.-: 00:15:18.169 08:06:29 -- scripts/common.sh@335 -- # read -ra ver1 00:15:18.169 08:06:29 -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.169 08:06:29 -- scripts/common.sh@336 -- # read -ra ver2 00:15:18.169 08:06:29 -- scripts/common.sh@337 -- # local 'op=<' 00:15:18.169 08:06:29 -- scripts/common.sh@339 -- # ver1_l=2 00:15:18.169 08:06:29 -- scripts/common.sh@340 -- # ver2_l=1 00:15:18.169 08:06:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:18.169 08:06:29 -- scripts/common.sh@343 -- # case "$op" in 00:15:18.169 08:06:29 -- scripts/common.sh@344 -- # : 1 00:15:18.169 08:06:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:18.169 08:06:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:18.169 08:06:29 -- scripts/common.sh@364 -- # decimal 1 00:15:18.169 08:06:29 -- scripts/common.sh@352 -- # local d=1 00:15:18.169 08:06:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.169 08:06:29 -- scripts/common.sh@354 -- # echo 1 00:15:18.169 08:06:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:18.169 08:06:29 -- scripts/common.sh@365 -- # decimal 2 00:15:18.169 08:06:29 -- scripts/common.sh@352 -- # local d=2 00:15:18.169 08:06:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.169 08:06:29 -- scripts/common.sh@354 -- # echo 2 00:15:18.169 08:06:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:18.169 08:06:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:18.169 08:06:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:18.169 08:06:29 -- scripts/common.sh@367 -- # return 0 00:15:18.169 08:06:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.169 08:06:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.169 --rc genhtml_branch_coverage=1 00:15:18.169 --rc genhtml_function_coverage=1 00:15:18.169 --rc genhtml_legend=1 00:15:18.169 --rc geninfo_all_blocks=1 00:15:18.169 --rc geninfo_unexecuted_blocks=1 00:15:18.169 00:15:18.169 ' 00:15:18.169 08:06:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.169 --rc genhtml_branch_coverage=1 00:15:18.169 --rc genhtml_function_coverage=1 00:15:18.169 --rc genhtml_legend=1 00:15:18.169 --rc geninfo_all_blocks=1 00:15:18.169 --rc geninfo_unexecuted_blocks=1 00:15:18.169 00:15:18.169 ' 00:15:18.169 08:06:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.169 --rc genhtml_branch_coverage=1 00:15:18.169 --rc genhtml_function_coverage=1 00:15:18.169 --rc genhtml_legend=1 00:15:18.169 --rc geninfo_all_blocks=1 00:15:18.169 --rc geninfo_unexecuted_blocks=1 00:15:18.169 00:15:18.169 ' 00:15:18.169 
08:06:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:18.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.169 --rc genhtml_branch_coverage=1 00:15:18.169 --rc genhtml_function_coverage=1 00:15:18.169 --rc genhtml_legend=1 00:15:18.169 --rc geninfo_all_blocks=1 00:15:18.169 --rc geninfo_unexecuted_blocks=1 00:15:18.169 00:15:18.169 ' 00:15:18.169 08:06:29 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.169 08:06:29 -- nvmf/common.sh@7 -- # uname -s 00:15:18.169 08:06:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.169 08:06:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.169 08:06:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.169 08:06:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.169 08:06:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.169 08:06:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.169 08:06:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.169 08:06:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.169 08:06:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.169 08:06:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.169 08:06:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:15:18.169 08:06:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:15:18.169 08:06:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.169 08:06:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.169 08:06:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.169 08:06:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.169 08:06:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.169 08:06:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.169 08:06:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.169 08:06:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.169 08:06:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.169 08:06:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.169 08:06:29 -- paths/export.sh@5 -- # export PATH 00:15:18.170 08:06:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.170 08:06:29 -- nvmf/common.sh@46 -- # : 0 00:15:18.170 08:06:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:18.170 08:06:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:18.170 08:06:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:18.170 08:06:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.170 08:06:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.170 08:06:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:18.170 08:06:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:18.170 08:06:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:18.170 08:06:29 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:18.170 08:06:29 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:18.170 08:06:29 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.170 08:06:29 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:18.170 08:06:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:18.170 08:06:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.170 08:06:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:18.170 08:06:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:18.170 08:06:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:18.170 08:06:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.170 08:06:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.170 08:06:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.170 08:06:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:18.170 08:06:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:18.170 08:06:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:18.170 08:06:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:18.170 08:06:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:18.170 08:06:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:18.170 08:06:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.170 08:06:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.170 08:06:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:18.170 08:06:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:18.170 08:06:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.170 08:06:29 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.170 08:06:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.170 08:06:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.170 08:06:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.170 08:06:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.170 08:06:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.170 08:06:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.170 08:06:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:18.170 08:06:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:18.429 Cannot find device "nvmf_tgt_br" 00:15:18.429 08:06:29 -- nvmf/common.sh@154 -- # true 00:15:18.429 08:06:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.429 Cannot find device "nvmf_tgt_br2" 00:15:18.429 08:06:29 -- nvmf/common.sh@155 -- # true 00:15:18.429 08:06:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:18.429 08:06:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:18.429 Cannot find device "nvmf_tgt_br" 00:15:18.429 08:06:29 -- nvmf/common.sh@157 -- # true 00:15:18.429 08:06:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:18.429 Cannot find device "nvmf_tgt_br2" 00:15:18.429 08:06:29 -- nvmf/common.sh@158 -- # true 00:15:18.429 08:06:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:18.429 08:06:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:18.429 08:06:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.429 08:06:29 -- nvmf/common.sh@161 -- # true 00:15:18.429 08:06:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.429 08:06:29 -- nvmf/common.sh@162 -- # true 00:15:18.429 08:06:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.429 08:06:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.429 08:06:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.429 08:06:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.429 08:06:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.429 08:06:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.429 08:06:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.429 08:06:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:18.429 08:06:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:18.429 08:06:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:18.429 08:06:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:18.429 08:06:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:18.429 08:06:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:18.429 08:06:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:18.429 08:06:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:15:18.429 08:06:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:18.429 08:06:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:18.429 08:06:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:18.429 08:06:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:18.689 08:06:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:18.689 08:06:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:18.689 08:06:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:18.689 08:06:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:18.689 08:06:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:18.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:18.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:18.689 00:15:18.689 --- 10.0.0.2 ping statistics --- 00:15:18.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.689 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:18.689 08:06:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:18.689 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:18.689 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:18.689 00:15:18.689 --- 10.0.0.3 ping statistics --- 00:15:18.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.689 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:18.689 08:06:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:18.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:18.689 00:15:18.689 --- 10.0.0.1 ping statistics --- 00:15:18.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.689 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:18.689 08:06:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.689 08:06:29 -- nvmf/common.sh@421 -- # return 0 00:15:18.689 08:06:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:18.689 08:06:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.689 08:06:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:18.689 08:06:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:18.689 08:06:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.689 08:06:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:18.689 08:06:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:18.689 08:06:29 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:18.689 08:06:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:18.689 08:06:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:18.689 08:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:18.689 08:06:29 -- nvmf/common.sh@469 -- # nvmfpid=85224 00:15:18.689 08:06:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:18.689 08:06:29 -- nvmf/common.sh@470 -- # waitforlisten 85224 00:15:18.689 08:06:29 -- common/autotest_common.sh@829 -- # '[' -z 85224 ']' 00:15:18.689 08:06:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.689 08:06:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.689 08:06:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.689 08:06:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.689 08:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:18.689 [2024-12-07 08:06:29.842087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:18.689 [2024-12-07 08:06:29.842178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.948 [2024-12-07 08:06:29.983542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.948 [2024-12-07 08:06:30.072943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.948 [2024-12-07 08:06:30.073144] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.948 [2024-12-07 08:06:30.073166] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.948 [2024-12-07 08:06:30.073180] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.948 [2024-12-07 08:06:30.073279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.516 08:06:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.516 08:06:30 -- common/autotest_common.sh@862 -- # return 0 00:15:19.516 08:06:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:19.516 08:06:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:19.516 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.784 08:06:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.784 08:06:30 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.784 08:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.784 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.784 [2024-12-07 08:06:30.840569] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.784 08:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.784 08:06:30 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:19.784 08:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.784 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.784 Malloc0 00:15:19.784 08:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.784 08:06:30 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:19.784 08:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.784 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.784 08:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.784 08:06:30 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:19.784 08:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.784 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.784 08:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.784 08:06:30 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.784 08:06:30 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.784 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.784 [2024-12-07 08:06:30.899997] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.784 08:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.784 08:06:30 -- target/queue_depth.sh@30 -- # bdevperf_pid=85274 00:15:19.784 08:06:30 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:19.784 08:06:30 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:19.784 08:06:30 -- target/queue_depth.sh@33 -- # waitforlisten 85274 /var/tmp/bdevperf.sock 00:15:19.784 08:06:30 -- common/autotest_common.sh@829 -- # '[' -z 85274 ']' 00:15:19.784 08:06:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.784 08:06:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.784 08:06:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.784 08:06:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.784 08:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:19.784 [2024-12-07 08:06:30.950586] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:19.784 [2024-12-07 08:06:30.950669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85274 ] 00:15:20.043 [2024-12-07 08:06:31.088456] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.043 [2024-12-07 08:06:31.163223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.977 08:06:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.977 08:06:31 -- common/autotest_common.sh@862 -- # return 0 00:15:20.977 08:06:31 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:20.977 08:06:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.977 08:06:31 -- common/autotest_common.sh@10 -- # set +x 00:15:20.977 NVMe0n1 00:15:20.977 08:06:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.977 08:06:32 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:20.977 Running I/O for 10 seconds... 
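[Editor's note] The 10-second bdevperf run just launched (results follow below) reduces to a short sequence of SPDK RPCs plus the bdevperf launch. A condensed sketch of what queue_depth.sh drives through its rpc_cmd helper (the equivalent of scripts/rpc.py against the running nvmf_tgt), reusing the exact arguments visible in this trace; paths and addresses are specific to this CI run:

  # Target side: TCP transport, 64 MiB / 512 B-block malloc bdev, subsystem cnode1 listening on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf in RPC-server mode, attach the remote controller over TCP,
  # then run 4 KiB verify I/O at queue depth 1024 for 10 seconds
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests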
00:15:30.953 00:15:30.953 Latency(us) 00:15:30.953 [2024-12-07T08:06:42.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.953 [2024-12-07T08:06:42.229Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:30.953 Verification LBA range: start 0x0 length 0x4000 00:15:30.953 NVMe0n1 : 10.05 15949.27 62.30 0.00 0.00 63994.49 12749.73 51713.86 00:15:30.953 [2024-12-07T08:06:42.229Z] =================================================================================================================== 00:15:30.953 [2024-12-07T08:06:42.229Z] Total : 15949.27 62.30 0.00 0.00 63994.49 12749.73 51713.86 00:15:30.953 0 00:15:30.953 08:06:42 -- target/queue_depth.sh@39 -- # killprocess 85274 00:15:30.953 08:06:42 -- common/autotest_common.sh@936 -- # '[' -z 85274 ']' 00:15:30.953 08:06:42 -- common/autotest_common.sh@940 -- # kill -0 85274 00:15:30.953 08:06:42 -- common/autotest_common.sh@941 -- # uname 00:15:30.953 08:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:30.953 08:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85274 00:15:30.953 08:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:31.213 killing process with pid 85274 00:15:31.213 08:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:31.213 08:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85274' 00:15:31.213 08:06:42 -- common/autotest_common.sh@955 -- # kill 85274 00:15:31.213 Received shutdown signal, test time was about 10.000000 seconds 00:15:31.213 00:15:31.213 Latency(us) 00:15:31.213 [2024-12-07T08:06:42.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.213 [2024-12-07T08:06:42.489Z] =================================================================================================================== 00:15:31.213 [2024-12-07T08:06:42.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:31.213 08:06:42 -- common/autotest_common.sh@960 -- # wait 85274 00:15:31.213 08:06:42 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:31.213 08:06:42 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:31.213 08:06:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:31.213 08:06:42 -- nvmf/common.sh@116 -- # sync 00:15:31.472 08:06:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:31.472 08:06:42 -- nvmf/common.sh@119 -- # set +e 00:15:31.472 08:06:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:31.472 08:06:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:31.472 rmmod nvme_tcp 00:15:31.472 rmmod nvme_fabrics 00:15:31.472 rmmod nvme_keyring 00:15:31.472 08:06:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:31.472 08:06:42 -- nvmf/common.sh@123 -- # set -e 00:15:31.472 08:06:42 -- nvmf/common.sh@124 -- # return 0 00:15:31.472 08:06:42 -- nvmf/common.sh@477 -- # '[' -n 85224 ']' 00:15:31.472 08:06:42 -- nvmf/common.sh@478 -- # killprocess 85224 00:15:31.472 08:06:42 -- common/autotest_common.sh@936 -- # '[' -z 85224 ']' 00:15:31.472 08:06:42 -- common/autotest_common.sh@940 -- # kill -0 85224 00:15:31.472 08:06:42 -- common/autotest_common.sh@941 -- # uname 00:15:31.472 08:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:31.472 08:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85224 00:15:31.472 08:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:31.472 killing process with pid 85224 00:15:31.472 08:06:42 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:31.472 08:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85224' 00:15:31.472 08:06:42 -- common/autotest_common.sh@955 -- # kill 85224 00:15:31.472 08:06:42 -- common/autotest_common.sh@960 -- # wait 85224 00:15:31.731 08:06:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:31.731 08:06:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:31.731 08:06:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:31.731 08:06:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.731 08:06:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:31.731 08:06:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.731 08:06:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.731 08:06:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.731 08:06:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:31.731 ************************************ 00:15:31.731 END TEST nvmf_queue_depth 00:15:31.731 00:15:31.731 real 0m13.619s 00:15:31.731 user 0m23.122s 00:15:31.731 sys 0m2.144s 00:15:31.731 08:06:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:31.731 08:06:42 -- common/autotest_common.sh@10 -- # set +x 00:15:31.731 ************************************ 00:15:31.731 08:06:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.731 08:06:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:31.731 08:06:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:31.731 08:06:42 -- common/autotest_common.sh@10 -- # set +x 00:15:31.731 ************************************ 00:15:31.731 START TEST nvmf_multipath 00:15:31.731 ************************************ 00:15:31.731 08:06:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:31.731 * Looking for test storage... 00:15:31.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.731 08:06:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:31.731 08:06:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:31.731 08:06:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:31.992 08:06:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:31.992 08:06:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:31.992 08:06:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:31.992 08:06:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:31.992 08:06:43 -- scripts/common.sh@335 -- # IFS=.-: 00:15:31.992 08:06:43 -- scripts/common.sh@335 -- # read -ra ver1 00:15:31.992 08:06:43 -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.992 08:06:43 -- scripts/common.sh@336 -- # read -ra ver2 00:15:31.992 08:06:43 -- scripts/common.sh@337 -- # local 'op=<' 00:15:31.992 08:06:43 -- scripts/common.sh@339 -- # ver1_l=2 00:15:31.992 08:06:43 -- scripts/common.sh@340 -- # ver2_l=1 00:15:31.992 08:06:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:31.992 08:06:43 -- scripts/common.sh@343 -- # case "$op" in 00:15:31.992 08:06:43 -- scripts/common.sh@344 -- # : 1 00:15:31.992 08:06:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:31.992 08:06:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.992 08:06:43 -- scripts/common.sh@364 -- # decimal 1 00:15:31.992 08:06:43 -- scripts/common.sh@352 -- # local d=1 00:15:31.992 08:06:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.992 08:06:43 -- scripts/common.sh@354 -- # echo 1 00:15:31.992 08:06:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:31.992 08:06:43 -- scripts/common.sh@365 -- # decimal 2 00:15:31.992 08:06:43 -- scripts/common.sh@352 -- # local d=2 00:15:31.992 08:06:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.992 08:06:43 -- scripts/common.sh@354 -- # echo 2 00:15:31.992 08:06:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:31.992 08:06:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:31.992 08:06:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:31.992 08:06:43 -- scripts/common.sh@367 -- # return 0 00:15:31.992 08:06:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.992 08:06:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.992 --rc genhtml_branch_coverage=1 00:15:31.992 --rc genhtml_function_coverage=1 00:15:31.992 --rc genhtml_legend=1 00:15:31.992 --rc geninfo_all_blocks=1 00:15:31.992 --rc geninfo_unexecuted_blocks=1 00:15:31.992 00:15:31.992 ' 00:15:31.992 08:06:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.992 --rc genhtml_branch_coverage=1 00:15:31.992 --rc genhtml_function_coverage=1 00:15:31.992 --rc genhtml_legend=1 00:15:31.992 --rc geninfo_all_blocks=1 00:15:31.992 --rc geninfo_unexecuted_blocks=1 00:15:31.992 00:15:31.992 ' 00:15:31.992 08:06:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.992 --rc genhtml_branch_coverage=1 00:15:31.992 --rc genhtml_function_coverage=1 00:15:31.992 --rc genhtml_legend=1 00:15:31.992 --rc geninfo_all_blocks=1 00:15:31.992 --rc geninfo_unexecuted_blocks=1 00:15:31.992 00:15:31.992 ' 00:15:31.992 08:06:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:31.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.992 --rc genhtml_branch_coverage=1 00:15:31.992 --rc genhtml_function_coverage=1 00:15:31.992 --rc genhtml_legend=1 00:15:31.992 --rc geninfo_all_blocks=1 00:15:31.992 --rc geninfo_unexecuted_blocks=1 00:15:31.992 00:15:31.992 ' 00:15:31.992 08:06:43 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.992 08:06:43 -- nvmf/common.sh@7 -- # uname -s 00:15:31.992 08:06:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.992 08:06:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.992 08:06:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.992 08:06:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.992 08:06:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.992 08:06:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.992 08:06:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.992 08:06:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.992 08:06:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.992 08:06:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.992 08:06:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:15:31.992 
08:06:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:15:31.992 08:06:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.992 08:06:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.992 08:06:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.992 08:06:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.992 08:06:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.992 08:06:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.992 08:06:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.992 08:06:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.992 08:06:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.992 08:06:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.992 08:06:43 -- paths/export.sh@5 -- # export PATH 00:15:31.993 08:06:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.993 08:06:43 -- nvmf/common.sh@46 -- # : 0 00:15:31.993 08:06:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:31.993 08:06:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:31.993 08:06:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:31.993 08:06:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.993 08:06:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.993 08:06:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:31.993 08:06:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:31.993 08:06:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:31.993 08:06:43 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.993 08:06:43 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.993 08:06:43 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:31.993 08:06:43 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:31.993 08:06:43 -- target/multipath.sh@43 -- # nvmftestinit 00:15:31.993 08:06:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:31.993 08:06:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.993 08:06:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:31.993 08:06:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:31.993 08:06:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:31.993 08:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.993 08:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.993 08:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.993 08:06:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:31.993 08:06:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:31.993 08:06:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:31.993 08:06:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:31.993 08:06:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:31.993 08:06:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:31.993 08:06:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.993 08:06:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.993 08:06:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.993 08:06:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:31.993 08:06:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.993 08:06:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.993 08:06:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.993 08:06:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.993 08:06:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.993 08:06:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.993 08:06:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.993 08:06:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.993 08:06:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:31.993 08:06:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:31.993 Cannot find device "nvmf_tgt_br" 00:15:31.993 08:06:43 -- nvmf/common.sh@154 -- # true 00:15:31.993 08:06:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.993 Cannot find device "nvmf_tgt_br2" 00:15:31.993 08:06:43 -- nvmf/common.sh@155 -- # true 00:15:31.993 08:06:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:31.993 08:06:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:31.993 Cannot find device "nvmf_tgt_br" 00:15:31.993 08:06:43 -- nvmf/common.sh@157 -- # true 00:15:31.993 08:06:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:31.993 Cannot find device "nvmf_tgt_br2" 00:15:31.993 08:06:43 -- nvmf/common.sh@158 -- # true 00:15:31.993 08:06:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:31.993 08:06:43 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:31.993 08:06:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.993 08:06:43 -- nvmf/common.sh@161 -- # true 00:15:31.993 08:06:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.993 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.993 08:06:43 -- nvmf/common.sh@162 -- # true 00:15:31.993 08:06:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.993 08:06:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.993 08:06:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.993 08:06:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.993 08:06:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.253 08:06:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.253 08:06:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.253 08:06:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:32.253 08:06:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:32.253 08:06:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:32.253 08:06:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:32.253 08:06:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:32.253 08:06:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:32.253 08:06:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.253 08:06:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.253 08:06:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.253 08:06:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:32.253 08:06:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:32.253 08:06:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.253 08:06:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.253 08:06:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.253 08:06:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.253 08:06:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.253 08:06:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:32.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:15:32.253 00:15:32.253 --- 10.0.0.2 ping statistics --- 00:15:32.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.253 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:32.253 08:06:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:32.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:32.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:15:32.253 00:15:32.253 --- 10.0.0.3 ping statistics --- 00:15:32.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.253 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:32.253 08:06:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:32.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:32.253 00:15:32.253 --- 10.0.0.1 ping statistics --- 00:15:32.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.253 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:32.253 08:06:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.253 08:06:43 -- nvmf/common.sh@421 -- # return 0 00:15:32.253 08:06:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:32.253 08:06:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.253 08:06:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:32.253 08:06:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:32.253 08:06:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.253 08:06:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:32.253 08:06:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:32.253 08:06:43 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:32.253 08:06:43 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:32.253 08:06:43 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:32.253 08:06:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:32.253 08:06:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.253 08:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:32.253 08:06:43 -- nvmf/common.sh@469 -- # nvmfpid=85611 00:15:32.253 08:06:43 -- nvmf/common.sh@470 -- # waitforlisten 85611 00:15:32.253 08:06:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:32.253 08:06:43 -- common/autotest_common.sh@829 -- # '[' -z 85611 ']' 00:15:32.253 08:06:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.253 08:06:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.253 08:06:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.253 08:06:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.253 08:06:43 -- common/autotest_common.sh@10 -- # set +x 00:15:32.253 [2024-12-07 08:06:43.490610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:32.253 [2024-12-07 08:06:43.490718] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.513 [2024-12-07 08:06:43.632364] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.513 [2024-12-07 08:06:43.705652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:32.513 [2024-12-07 08:06:43.705835] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:32.513 [2024-12-07 08:06:43.705848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.513 [2024-12-07 08:06:43.705856] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.513 [2024-12-07 08:06:43.706525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.513 [2024-12-07 08:06:43.706758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.513 [2024-12-07 08:06:43.706855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.513 [2024-12-07 08:06:43.706871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.450 08:06:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.450 08:06:44 -- common/autotest_common.sh@862 -- # return 0 00:15:33.450 08:06:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:33.450 08:06:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:33.450 08:06:44 -- common/autotest_common.sh@10 -- # set +x 00:15:33.450 08:06:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.450 08:06:44 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:33.709 [2024-12-07 08:06:44.847284] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.709 08:06:44 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:33.967 Malloc0 00:15:33.967 08:06:45 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:34.226 08:06:45 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:34.483 08:06:45 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:34.741 [2024-12-07 08:06:45.879524] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.741 08:06:45 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:34.999 [2024-12-07 08:06:46.107768] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:34.999 08:06:46 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:35.257 08:06:46 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:35.516 08:06:46 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.516 08:06:46 -- common/autotest_common.sh@1187 -- # local i=0 00:15:35.516 08:06:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.516 08:06:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:35.516 08:06:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:37.419 08:06:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
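[Editor's note] The two nvme connect calls traced above (multipath.sh@67/@68) register the same subsystem over both listeners, so once waitforserial sees the namespace the host holds a single NVMe subsystem with two controllers, one per path. A minimal sketch of that connect-and-inspect step; the host NQN/ID are the ones generated for this run (yours will differ), the flags are copied verbatim from the trace, and which controller index maps to which listener is an assumption, since the test only checks that both paths exist and reach the expected ANA states:

  # One connect per listener address, same subsystem NQN
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  # Each path surfaces a per-controller block node; the ANA state is readable from sysfs
  cat /sys/block/nvme0c0n1/ana_state   # first path (mapping to 10.0.0.2 assumed)
  cat /sys/block/nvme0c1n1/ana_state   # second path (mapping to 10.0.0.3 assumed)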
00:15:37.419 08:06:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:37.419 08:06:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.419 08:06:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:37.419 08:06:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.419 08:06:48 -- common/autotest_common.sh@1197 -- # return 0 00:15:37.419 08:06:48 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:37.419 08:06:48 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:37.419 08:06:48 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:37.419 08:06:48 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:37.419 08:06:48 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:37.419 08:06:48 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:37.419 08:06:48 -- target/multipath.sh@38 -- # return 0 00:15:37.419 08:06:48 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:37.419 08:06:48 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:37.419 08:06:48 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:37.419 08:06:48 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:37.419 08:06:48 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:37.419 08:06:48 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:37.419 08:06:48 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:37.419 08:06:48 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:37.419 08:06:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.419 08:06:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:37.419 08:06:48 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:37.419 08:06:48 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:37.419 08:06:48 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:37.419 08:06:48 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:37.419 08:06:48 -- target/multipath.sh@22 -- # local timeout=20 00:15:37.419 08:06:48 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:37.419 08:06:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:37.419 08:06:48 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:37.419 08:06:48 -- target/multipath.sh@85 -- # echo numa 00:15:37.419 08:06:48 -- target/multipath.sh@88 -- # fio_pid=85750 00:15:37.419 08:06:48 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:37.419 08:06:48 -- target/multipath.sh@90 -- # sleep 1 00:15:37.419 [global] 00:15:37.419 thread=1 00:15:37.419 invalidate=1 00:15:37.419 rw=randrw 00:15:37.419 time_based=1 00:15:37.419 runtime=6 00:15:37.419 ioengine=libaio 00:15:37.419 direct=1 00:15:37.419 bs=4096 00:15:37.419 iodepth=128 00:15:37.419 norandommap=0 00:15:37.419 numjobs=1 00:15:37.419 00:15:37.419 verify_dump=1 00:15:37.419 verify_backlog=512 00:15:37.419 verify_state_save=0 00:15:37.419 do_verify=1 00:15:37.419 verify=crc32c-intel 00:15:37.419 [job0] 00:15:37.419 filename=/dev/nvme0n1 00:15:37.419 Could not set queue depth (nvme0n1) 00:15:37.678 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:37.678 fio-3.35 00:15:37.678 Starting 1 thread 00:15:38.614 08:06:49 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:38.614 08:06:49 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:38.873 08:06:50 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:38.873 08:06:50 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:38.873 08:06:50 -- target/multipath.sh@22 -- # local timeout=20 00:15:38.873 08:06:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:38.873 08:06:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:38.873 08:06:50 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:38.873 08:06:50 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:38.873 08:06:50 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:38.873 08:06:50 -- target/multipath.sh@22 -- # local timeout=20 00:15:38.873 08:06:50 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:38.873 08:06:50 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:38.873 08:06:50 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:38.873 08:06:50 -- target/multipath.sh@25 -- # sleep 1s 00:15:40.247 08:06:51 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:40.247 08:06:51 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:40.247 08:06:51 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:40.247 08:06:51 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:40.247 08:06:51 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:40.506 08:06:51 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:40.506 08:06:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:40.506 08:06:51 -- target/multipath.sh@22 -- # local timeout=20 00:15:40.506 08:06:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:40.506 08:06:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:40.506 08:06:51 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:40.506 08:06:51 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:40.506 08:06:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:40.506 08:06:51 -- target/multipath.sh@22 -- # local timeout=20 00:15:40.506 08:06:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:40.506 08:06:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:40.506 08:06:51 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:40.506 08:06:51 -- target/multipath.sh@25 -- # sleep 1s 00:15:41.442 08:06:52 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:41.442 08:06:52 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:41.442 08:06:52 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:41.442 08:06:52 -- target/multipath.sh@104 -- # wait 85750 00:15:43.993 00:15:43.993 job0: (groupid=0, jobs=1): err= 0: pid=85772: Sat Dec 7 08:06:54 2024 00:15:43.993 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(278MiB/6001msec) 00:15:43.993 slat (usec): min=2, max=8413, avg=47.99, stdev=218.16 00:15:43.993 clat (usec): min=2097, max=14640, avg=7317.68, stdev=1124.22 00:15:43.993 lat (usec): min=2106, max=14673, avg=7365.67, stdev=1133.13 00:15:43.993 clat percentiles (usec): 00:15:43.993 | 1.00th=[ 4424], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6521], 00:15:43.993 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7504], 00:15:43.993 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8586], 95.00th=[ 9110], 00:15:43.993 | 99.00th=[10814], 99.50th=[11207], 99.90th=[11863], 99.95th=[12256], 00:15:43.993 | 99.99th=[12649] 00:15:43.993 bw ( KiB/s): min=12896, max=30656, per=53.07%, avg=25196.36, stdev=5635.46, samples=11 00:15:43.993 iops : min= 3224, max= 7664, avg=6299.09, stdev=1408.87, samples=11 00:15:43.993 write: IOPS=7021, BW=27.4MiB/s (28.8MB/s)(148MiB/5400msec); 0 zone resets 00:15:43.993 slat (usec): min=3, max=1918, avg=59.00, stdev=149.97 00:15:43.993 clat (usec): min=1129, max=12269, avg=6326.08, stdev=928.61 00:15:43.993 lat (usec): min=1172, max=12310, avg=6385.07, stdev=932.16 00:15:43.993 clat percentiles (usec): 00:15:43.993 | 1.00th=[ 3523], 5.00th=[ 4555], 10.00th=[ 5342], 20.00th=[ 5735], 00:15:43.993 | 30.00th=[ 5997], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6587], 00:15:43.993 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7242], 95.00th=[ 7504], 00:15:43.993 | 99.00th=[ 9241], 99.50th=[ 9896], 99.90th=[11207], 99.95th=[11469], 00:15:43.993 | 99.99th=[11863] 00:15:43.993 bw ( KiB/s): min=13408, max=30464, per=89.76%, avg=25211.18, stdev=5362.80, samples=11 00:15:43.993 iops : min= 3352, max= 7616, avg=6302.73, stdev=1340.67, samples=11 00:15:43.993 lat (msec) : 2=0.01%, 4=1.19%, 10=96.96%, 20=1.84% 00:15:43.993 cpu : usr=5.80%, sys=22.54%, ctx=6639, majf=0, minf=102 00:15:43.993 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:43.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:43.993 issued rwts: total=71224,37916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.993 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:43.993 00:15:43.993 Run status group 0 (all jobs): 00:15:43.993 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=278MiB (292MB), run=6001-6001msec 00:15:43.993 WRITE: bw=27.4MiB/s (28.8MB/s), 27.4MiB/s-27.4MiB/s (28.8MB/s-28.8MB/s), io=148MiB (155MB), run=5400-5400msec 00:15:43.993 00:15:43.993 Disk stats (read/write): 00:15:43.993 nvme0n1: ios=70013/37376, merge=0/0, ticks=479588/220909, in_queue=700497, util=98.63% 00:15:43.993 08:06:54 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:43.993 08:06:55 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:44.250 08:06:55 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:44.250 
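[Editor's note] Path failover in this test is driven entirely by flipping the per-listener ANA state on the target and waiting for the host to observe it, as in the multipath.sh@92/@93 and @106/@107 calls traced here. A sketch of that pattern for the round-robin phase now starting, with the poll loop reconstructed from the check_ana_state trace; the 20-retry, 1-second-sleep timeout matches the trace, while returning failure on expiry is an assumption:

  # Target side: advertise both listeners as optimized
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
  # Host side: poll sysfs until the kernel reflects the new state (mirrors check_ana_state)
  check_ana_state() {
      local path=$1 want=$2 timeout=20
      local f=/sys/block/$path/ana_state
      while [[ ! -e $f || $(< "$f") != "$want" ]]; do
          (( timeout-- == 0 )) && return 1   # give up after ~20 s (failure path assumed)
          sleep 1s
      done
  }
  check_ana_state nvme0c0n1 optimized
  check_ana_state nvme0c1n1 optimized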
08:06:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:44.250 08:06:55 -- target/multipath.sh@22 -- # local timeout=20 00:15:44.250 08:06:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:44.250 08:06:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:44.250 08:06:55 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:44.250 08:06:55 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:44.250 08:06:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:44.250 08:06:55 -- target/multipath.sh@22 -- # local timeout=20 00:15:44.250 08:06:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:44.250 08:06:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:44.250 08:06:55 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:44.250 08:06:55 -- target/multipath.sh@25 -- # sleep 1s 00:15:45.184 08:06:56 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:45.184 08:06:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:45.184 08:06:56 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:45.184 08:06:56 -- target/multipath.sh@113 -- # echo round-robin 00:15:45.184 08:06:56 -- target/multipath.sh@116 -- # fio_pid=85906 00:15:45.184 08:06:56 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:45.184 08:06:56 -- target/multipath.sh@118 -- # sleep 1 00:15:45.457 [global] 00:15:45.457 thread=1 00:15:45.457 invalidate=1 00:15:45.457 rw=randrw 00:15:45.457 time_based=1 00:15:45.457 runtime=6 00:15:45.457 ioengine=libaio 00:15:45.457 direct=1 00:15:45.457 bs=4096 00:15:45.457 iodepth=128 00:15:45.457 norandommap=0 00:15:45.457 numjobs=1 00:15:45.457 00:15:45.457 verify_dump=1 00:15:45.457 verify_backlog=512 00:15:45.457 verify_state_save=0 00:15:45.457 do_verify=1 00:15:45.457 verify=crc32c-intel 00:15:45.457 [job0] 00:15:45.457 filename=/dev/nvme0n1 00:15:45.457 Could not set queue depth (nvme0n1) 00:15:45.457 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:45.457 fio-3.35 00:15:45.457 Starting 1 thread 00:15:46.389 08:06:57 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:46.647 08:06:57 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:46.904 08:06:58 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:46.904 08:06:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:46.904 08:06:58 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.904 08:06:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:46.904 08:06:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:46.904 08:06:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:46.904 08:06:58 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:46.904 08:06:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:46.904 08:06:58 -- target/multipath.sh@22 -- # local timeout=20 00:15:46.904 08:06:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:46.904 08:06:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:46.904 08:06:58 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:46.904 08:06:58 -- target/multipath.sh@25 -- # sleep 1s 00:15:47.836 08:06:59 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:47.836 08:06:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:47.836 08:06:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:47.836 08:06:59 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:48.095 08:06:59 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:48.353 08:06:59 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:48.353 08:06:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:48.353 08:06:59 -- target/multipath.sh@22 -- # local timeout=20 00:15:48.353 08:06:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:48.353 08:06:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:48.353 08:06:59 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:48.353 08:06:59 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:48.353 08:06:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:48.353 08:06:59 -- target/multipath.sh@22 -- # local timeout=20 00:15:48.353 08:06:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:48.353 08:06:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:48.353 08:06:59 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:48.353 08:06:59 -- target/multipath.sh@25 -- # sleep 1s 00:15:49.731 08:07:00 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:49.731 08:07:00 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:49.731 08:07:00 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:49.731 08:07:00 -- target/multipath.sh@132 -- # wait 85906 00:15:51.634 00:15:51.634 job0: (groupid=0, jobs=1): err= 0: pid=85927: Sat Dec 7 08:07:02 2024 00:15:51.634 read: IOPS=13.3k, BW=51.9MiB/s (54.5MB/s)(312MiB/6006msec) 00:15:51.634 slat (usec): min=3, max=6074, avg=38.71, stdev=195.69 00:15:51.634 clat (usec): min=320, max=13664, avg=6745.82, stdev=1447.26 00:15:51.634 lat (usec): min=341, max=13672, avg=6784.52, stdev=1463.18 00:15:51.634 clat percentiles (usec): 00:15:51.634 | 1.00th=[ 3359], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5604], 00:15:51.634 | 30.00th=[ 6259], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 7046], 00:15:51.634 | 70.00th=[ 7373], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 8979], 00:15:51.634 | 99.00th=[10814], 99.50th=[11076], 99.90th=[11863], 99.95th=[12518], 00:15:51.634 | 99.99th=[13304] 00:15:51.634 bw ( KiB/s): min=15288, max=44264, per=51.49%, avg=27388.36, stdev=9489.05, samples=11 00:15:51.634 iops : min= 3822, max=11066, avg=6847.09, stdev=2372.26, samples=11 00:15:51.634 write: IOPS=7726, BW=30.2MiB/s (31.6MB/s)(157MiB/5211msec); 0 zone resets 00:15:51.634 slat (usec): min=11, max=2741, avg=49.46, stdev=128.21 00:15:51.634 clat (usec): min=316, max=12188, avg=5546.36, stdev=1471.36 00:15:51.634 lat (usec): min=354, max=12219, avg=5595.82, stdev=1484.83 00:15:51.634 clat percentiles (usec): 00:15:51.634 | 1.00th=[ 2540], 5.00th=[ 3097], 10.00th=[ 3458], 20.00th=[ 3982], 00:15:51.634 | 30.00th=[ 4621], 40.00th=[ 5473], 50.00th=[ 5932], 60.00th=[ 6194], 00:15:51.634 | 70.00th=[ 6456], 80.00th=[ 6718], 90.00th=[ 7111], 95.00th=[ 7373], 00:15:51.634 | 99.00th=[ 9110], 99.50th=[ 9765], 99.90th=[10945], 99.95th=[11469], 00:15:51.634 | 99.99th=[11863] 00:15:51.634 bw ( KiB/s): min=15200, max=43640, per=88.49%, avg=27349.09, stdev=9385.49, samples=11 00:15:51.634 iops : min= 3800, max=10910, avg=6837.27, stdev=2346.37, samples=11 00:15:51.634 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:51.634 lat (msec) : 2=0.09%, 4=8.98%, 10=89.25%, 20=1.66% 00:15:51.634 cpu : usr=5.98%, sys=24.16%, ctx=7472, majf=0, minf=78 00:15:51.634 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:51.634 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:51.634 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:51.634 issued rwts: total=79860,40264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:51.634 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:51.634 00:15:51.634 Run status group 0 (all jobs): 00:15:51.634 READ: bw=51.9MiB/s (54.5MB/s), 51.9MiB/s-51.9MiB/s (54.5MB/s-54.5MB/s), io=312MiB (327MB), run=6006-6006msec 00:15:51.634 WRITE: bw=30.2MiB/s (31.6MB/s), 30.2MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=157MiB (165MB), run=5211-5211msec 00:15:51.634 00:15:51.634 Disk stats (read/write): 00:15:51.634 nvme0n1: ios=78980/39543, merge=0/0, ticks=494256/201679, in_queue=695935, util=98.61% 00:15:51.634 08:07:02 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:51.634 08:07:02 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:51.634 08:07:02 -- common/autotest_common.sh@1208 -- # local i=0 00:15:51.634 08:07:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:51.634 08:07:02 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.634 08:07:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:51.634 08:07:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:51.634 08:07:02 -- common/autotest_common.sh@1220 -- # return 0 00:15:51.634 08:07:02 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.894 08:07:03 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:51.894 08:07:03 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:51.894 08:07:03 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:51.894 08:07:03 -- target/multipath.sh@144 -- # nvmftestfini 00:15:51.894 08:07:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:51.894 08:07:03 -- nvmf/common.sh@116 -- # sync 00:15:52.153 08:07:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:52.153 08:07:03 -- nvmf/common.sh@119 -- # set +e 00:15:52.153 08:07:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:52.153 08:07:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:52.153 rmmod nvme_tcp 00:15:52.153 rmmod nvme_fabrics 00:15:52.153 rmmod nvme_keyring 00:15:52.153 08:07:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:52.153 08:07:03 -- nvmf/common.sh@123 -- # set -e 00:15:52.153 08:07:03 -- nvmf/common.sh@124 -- # return 0 00:15:52.153 08:07:03 -- nvmf/common.sh@477 -- # '[' -n 85611 ']' 00:15:52.153 08:07:03 -- nvmf/common.sh@478 -- # killprocess 85611 00:15:52.153 08:07:03 -- common/autotest_common.sh@936 -- # '[' -z 85611 ']' 00:15:52.153 08:07:03 -- common/autotest_common.sh@940 -- # kill -0 85611 00:15:52.153 08:07:03 -- common/autotest_common.sh@941 -- # uname 00:15:52.153 08:07:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.153 08:07:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85611 00:15:52.153 killing process with pid 85611 00:15:52.154 08:07:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:52.154 08:07:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:52.154 08:07:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85611' 00:15:52.154 08:07:03 -- common/autotest_common.sh@955 -- # kill 85611 00:15:52.154 08:07:03 -- common/autotest_common.sh@960 -- # wait 85611 00:15:52.412 08:07:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:52.412 08:07:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:52.412 08:07:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:52.412 08:07:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.412 08:07:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:52.412 08:07:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.412 08:07:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.412 08:07:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.412 08:07:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:52.412 00:15:52.412 real 0m20.682s 00:15:52.412 user 1m20.661s 00:15:52.412 sys 0m6.929s 00:15:52.412 08:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:52.412 08:07:03 -- common/autotest_common.sh@10 -- # set +x 00:15:52.413 ************************************ 00:15:52.413 END TEST nvmf_multipath 00:15:52.413 ************************************ 00:15:52.413 08:07:03 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:52.413 08:07:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:52.413 08:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.413 08:07:03 -- common/autotest_common.sh@10 -- # set +x 00:15:52.413 ************************************ 00:15:52.413 START TEST nvmf_zcopy 00:15:52.413 ************************************ 00:15:52.413 08:07:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:52.672 * Looking for test storage... 00:15:52.672 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.672 08:07:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:52.672 08:07:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:52.673 08:07:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:52.673 08:07:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:52.673 08:07:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:52.673 08:07:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:52.673 08:07:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:52.673 08:07:03 -- scripts/common.sh@335 -- # IFS=.-: 00:15:52.673 08:07:03 -- scripts/common.sh@335 -- # read -ra ver1 00:15:52.673 08:07:03 -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.673 08:07:03 -- scripts/common.sh@336 -- # read -ra ver2 00:15:52.673 08:07:03 -- scripts/common.sh@337 -- # local 'op=<' 00:15:52.673 08:07:03 -- scripts/common.sh@339 -- # ver1_l=2 00:15:52.673 08:07:03 -- scripts/common.sh@340 -- # ver2_l=1 00:15:52.673 08:07:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:52.673 08:07:03 -- scripts/common.sh@343 -- # case "$op" in 00:15:52.673 08:07:03 -- scripts/common.sh@344 -- # : 1 00:15:52.673 08:07:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:52.673 08:07:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.673 08:07:03 -- scripts/common.sh@364 -- # decimal 1 00:15:52.673 08:07:03 -- scripts/common.sh@352 -- # local d=1 00:15:52.673 08:07:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.673 08:07:03 -- scripts/common.sh@354 -- # echo 1 00:15:52.673 08:07:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:52.673 08:07:03 -- scripts/common.sh@365 -- # decimal 2 00:15:52.673 08:07:03 -- scripts/common.sh@352 -- # local d=2 00:15:52.673 08:07:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.673 08:07:03 -- scripts/common.sh@354 -- # echo 2 00:15:52.673 08:07:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:52.673 08:07:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:52.673 08:07:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:52.673 08:07:03 -- scripts/common.sh@367 -- # return 0 00:15:52.673 08:07:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.673 08:07:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:52.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.673 --rc genhtml_branch_coverage=1 00:15:52.673 --rc genhtml_function_coverage=1 00:15:52.673 --rc genhtml_legend=1 00:15:52.673 --rc geninfo_all_blocks=1 00:15:52.673 --rc geninfo_unexecuted_blocks=1 00:15:52.673 00:15:52.673 ' 00:15:52.673 08:07:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:52.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.673 --rc genhtml_branch_coverage=1 00:15:52.673 --rc genhtml_function_coverage=1 00:15:52.673 --rc genhtml_legend=1 00:15:52.673 --rc geninfo_all_blocks=1 00:15:52.673 --rc geninfo_unexecuted_blocks=1 00:15:52.673 00:15:52.673 ' 00:15:52.673 08:07:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:52.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.673 --rc genhtml_branch_coverage=1 00:15:52.673 --rc genhtml_function_coverage=1 00:15:52.673 --rc genhtml_legend=1 00:15:52.673 --rc geninfo_all_blocks=1 00:15:52.673 --rc geninfo_unexecuted_blocks=1 00:15:52.673 00:15:52.673 ' 00:15:52.673 08:07:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:52.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.673 --rc genhtml_branch_coverage=1 00:15:52.673 --rc genhtml_function_coverage=1 00:15:52.673 --rc genhtml_legend=1 00:15:52.673 --rc geninfo_all_blocks=1 00:15:52.673 --rc geninfo_unexecuted_blocks=1 00:15:52.673 00:15:52.673 ' 00:15:52.673 08:07:03 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.673 08:07:03 -- nvmf/common.sh@7 -- # uname -s 00:15:52.673 08:07:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.673 08:07:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.673 08:07:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.673 08:07:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.673 08:07:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.673 08:07:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.673 08:07:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.673 08:07:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.673 08:07:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.673 08:07:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.673 08:07:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:15:52.673 
08:07:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:15:52.673 08:07:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.673 08:07:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.673 08:07:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.673 08:07:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.673 08:07:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.673 08:07:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.673 08:07:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.673 08:07:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.673 08:07:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.673 08:07:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.673 08:07:03 -- paths/export.sh@5 -- # export PATH 00:15:52.673 08:07:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.673 08:07:03 -- nvmf/common.sh@46 -- # : 0 00:15:52.673 08:07:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:52.673 08:07:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:52.673 08:07:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:52.673 08:07:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.673 08:07:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.673 08:07:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:52.673 08:07:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:52.673 08:07:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:52.673 08:07:03 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:52.673 08:07:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:52.673 08:07:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.673 08:07:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:52.673 08:07:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:52.673 08:07:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:52.673 08:07:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.673 08:07:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.673 08:07:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.673 08:07:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:52.673 08:07:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:52.673 08:07:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:52.673 08:07:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:52.673 08:07:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:52.673 08:07:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:52.673 08:07:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.673 08:07:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.673 08:07:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.673 08:07:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:52.673 08:07:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.673 08:07:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.673 08:07:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.673 08:07:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.673 08:07:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.673 08:07:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.673 08:07:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.673 08:07:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.673 08:07:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:52.673 08:07:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:52.673 Cannot find device "nvmf_tgt_br" 00:15:52.673 08:07:03 -- nvmf/common.sh@154 -- # true 00:15:52.673 08:07:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.673 Cannot find device "nvmf_tgt_br2" 00:15:52.673 08:07:03 -- nvmf/common.sh@155 -- # true 00:15:52.673 08:07:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:52.673 08:07:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:52.673 Cannot find device "nvmf_tgt_br" 00:15:52.673 08:07:03 -- nvmf/common.sh@157 -- # true 00:15:52.673 08:07:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:52.673 Cannot find device "nvmf_tgt_br2" 00:15:52.673 08:07:03 -- nvmf/common.sh@158 -- # true 00:15:52.674 08:07:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:52.933 08:07:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:52.933 08:07:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.933 08:07:03 -- nvmf/common.sh@161 -- # true 00:15:52.933 08:07:03 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.933 08:07:03 -- nvmf/common.sh@162 -- # true 00:15:52.933 08:07:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.933 08:07:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.933 08:07:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.933 08:07:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.933 08:07:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.933 08:07:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.933 08:07:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.933 08:07:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.933 08:07:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.933 08:07:04 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:52.933 08:07:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:52.933 08:07:04 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:52.933 08:07:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:52.933 08:07:04 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.933 08:07:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.933 08:07:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.933 08:07:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:52.933 08:07:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:52.933 08:07:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.933 08:07:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.933 08:07:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.933 08:07:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.933 08:07:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.933 08:07:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:52.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:15:52.933 00:15:52.933 --- 10.0.0.2 ping statistics --- 00:15:52.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.933 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:15:52.933 08:07:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:52.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:15:52.933 00:15:52.933 --- 10.0.0.3 ping statistics --- 00:15:52.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.933 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:52.933 08:07:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:52.933 00:15:52.933 --- 10.0.0.1 ping statistics --- 00:15:52.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.933 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:52.933 08:07:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.933 08:07:04 -- nvmf/common.sh@421 -- # return 0 00:15:52.933 08:07:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.933 08:07:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.933 08:07:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:52.934 08:07:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:52.934 08:07:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.934 08:07:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:52.934 08:07:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:52.934 08:07:04 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:52.934 08:07:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.934 08:07:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.934 08:07:04 -- common/autotest_common.sh@10 -- # set +x 00:15:52.934 08:07:04 -- nvmf/common.sh@469 -- # nvmfpid=86215 00:15:52.934 08:07:04 -- nvmf/common.sh@470 -- # waitforlisten 86215 00:15:52.934 08:07:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:52.934 08:07:04 -- common/autotest_common.sh@829 -- # '[' -z 86215 ']' 00:15:52.934 08:07:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.934 08:07:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.934 08:07:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.934 08:07:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.934 08:07:04 -- common/autotest_common.sh@10 -- # set +x 00:15:53.193 [2024-12-07 08:07:04.247623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:53.193 [2024-12-07 08:07:04.247747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.193 [2024-12-07 08:07:04.392397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.453 [2024-12-07 08:07:04.478820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.453 [2024-12-07 08:07:04.479002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.453 [2024-12-07 08:07:04.479018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.453 [2024-12-07 08:07:04.479029] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
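The nvmf_veth_init sequence above builds the zcopy test data path out of a network namespace and veth pairs: the target runs inside nvmf_tgt_ns_spdk and owns 10.0.0.2 and 10.0.0.3, the initiator side keeps 10.0.0.1, and everything is attached to the nvmf_br bridge before nvmf_tgt is launched in the namespace. A condensed sketch of that topology, using the interface names and addresses from the trace (the full sequence in nvmf/common.sh also installs the iptables ACCEPT rules and handles teardown):

#!/usr/bin/env bash
# Run as root. One namespace for the target, two target interfaces
# (10.0.0.2, 10.0.0.3), one initiator interface (10.0.0.1), one bridge.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Sanity check: the initiator should reach both target addresses.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3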
00:15:53.453 [2024-12-07 08:07:04.479065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.020 08:07:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.020 08:07:05 -- common/autotest_common.sh@862 -- # return 0 00:15:54.020 08:07:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.020 08:07:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.020 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 08:07:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.278 08:07:05 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:54.278 08:07:05 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:54.278 08:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.278 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 [2024-12-07 08:07:05.340668] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.278 08:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 08:07:05 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:54.278 08:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.278 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 08:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 08:07:05 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.278 08:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.278 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 [2024-12-07 08:07:05.356751] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.278 08:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 08:07:05 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:54.278 08:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.278 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 08:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 08:07:05 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:54.278 08:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.278 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 malloc0 00:15:54.278 08:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 08:07:05 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:54.278 08:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.278 08:07:05 -- common/autotest_common.sh@10 -- # set +x 00:15:54.278 08:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.278 08:07:05 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:54.278 08:07:05 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:54.278 08:07:05 -- nvmf/common.sh@520 -- # config=() 00:15:54.278 08:07:05 -- nvmf/common.sh@520 -- # local subsystem config 00:15:54.278 08:07:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:54.278 08:07:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:54.278 { 00:15:54.278 "params": { 00:15:54.278 "name": "Nvme$subsystem", 00:15:54.278 "trtype": "$TEST_TRANSPORT", 
00:15:54.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:54.278 "adrfam": "ipv4", 00:15:54.278 "trsvcid": "$NVMF_PORT", 00:15:54.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:54.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:54.278 "hdgst": ${hdgst:-false}, 00:15:54.278 "ddgst": ${ddgst:-false} 00:15:54.278 }, 00:15:54.278 "method": "bdev_nvme_attach_controller" 00:15:54.278 } 00:15:54.278 EOF 00:15:54.278 )") 00:15:54.278 08:07:05 -- nvmf/common.sh@542 -- # cat 00:15:54.278 08:07:05 -- nvmf/common.sh@544 -- # jq . 00:15:54.278 08:07:05 -- nvmf/common.sh@545 -- # IFS=, 00:15:54.278 08:07:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:54.278 "params": { 00:15:54.278 "name": "Nvme1", 00:15:54.278 "trtype": "tcp", 00:15:54.278 "traddr": "10.0.0.2", 00:15:54.278 "adrfam": "ipv4", 00:15:54.278 "trsvcid": "4420", 00:15:54.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:54.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:54.278 "hdgst": false, 00:15:54.278 "ddgst": false 00:15:54.278 }, 00:15:54.278 "method": "bdev_nvme_attach_controller" 00:15:54.278 }' 00:15:54.278 [2024-12-07 08:07:05.454297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:54.278 [2024-12-07 08:07:05.454424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86267 ] 00:15:54.536 [2024-12-07 08:07:05.597288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.536 [2024-12-07 08:07:05.675743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.795 Running I/O for 10 seconds... 00:16:04.768 00:16:04.768 Latency(us) 00:16:04.768 [2024-12-07T08:07:16.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:04.768 [2024-12-07T08:07:16.044Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:04.768 Verification LBA range: start 0x0 length 0x1000 00:16:04.768 Nvme1n1 : 10.01 10412.26 81.35 0.00 0.00 12262.62 1377.75 18945.86 00:16:04.768 [2024-12-07T08:07:16.044Z] =================================================================================================================== 00:16:04.768 [2024-12-07T08:07:16.044Z] Total : 10412.26 81.35 0.00 0.00 12262.62 1377.75 18945.86 00:16:05.027 08:07:16 -- target/zcopy.sh@39 -- # perfpid=86385 00:16:05.027 08:07:16 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:05.027 08:07:16 -- common/autotest_common.sh@10 -- # set +x 00:16:05.027 08:07:16 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:05.027 08:07:16 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:05.027 08:07:16 -- nvmf/common.sh@520 -- # config=() 00:16:05.027 08:07:16 -- nvmf/common.sh@520 -- # local subsystem config 00:16:05.027 08:07:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:05.027 08:07:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:05.027 { 00:16:05.027 "params": { 00:16:05.027 "name": "Nvme$subsystem", 00:16:05.027 "trtype": "$TEST_TRANSPORT", 00:16:05.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.027 "adrfam": "ipv4", 00:16:05.027 "trsvcid": "$NVMF_PORT", 00:16:05.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.027 "hdgst": ${hdgst:-false}, 00:16:05.027 "ddgst": ${ddgst:-false} 
00:16:05.027 }, 00:16:05.027 "method": "bdev_nvme_attach_controller" 00:16:05.027 } 00:16:05.027 EOF 00:16:05.027 )") 00:16:05.027 08:07:16 -- nvmf/common.sh@542 -- # cat 00:16:05.028 [2024-12-07 08:07:16.048089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.048148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 08:07:16 -- nvmf/common.sh@544 -- # jq . 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 08:07:16 -- nvmf/common.sh@545 -- # IFS=, 00:16:05.028 08:07:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:05.028 "params": { 00:16:05.028 "name": "Nvme1", 00:16:05.028 "trtype": "tcp", 00:16:05.028 "traddr": "10.0.0.2", 00:16:05.028 "adrfam": "ipv4", 00:16:05.028 "trsvcid": "4420", 00:16:05.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.028 "hdgst": false, 00:16:05.028 "ddgst": false 00:16:05.028 }, 00:16:05.028 "method": "bdev_nvme_attach_controller" 00:16:05.028 }' 00:16:05.028 [2024-12-07 08:07:16.060053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.060081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.068036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.068061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.080059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.080081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.092059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.092080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 [2024-12-07 08:07:16.096314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
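The bdev_nvme attach configuration printed just above is produced by gen_nvmf_target_json and handed to bdevperf over an anonymous file descriptor (the /dev/fd/63 seen in the trace). Conceptually, the invocation traced at target/zcopy.sh@37 boils down to something like the following sketch, assuming the repo paths used throughout this run:

#!/usr/bin/env bash
# Feed the generated bdev_nvme configuration to bdevperf via process
# substitution: 5 second 50/50 random read/write run, queue depth 128,
# 8 KiB I/O size, matching the flags shown in the trace.
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) \
    -t 5 -q 128 -w randrw -M 50 -o 8192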
00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.096399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86385 ] 00:16:05.028 [2024-12-07 08:07:16.104060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.104080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.116080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.116104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.128076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.128097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.140065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.140087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.152064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.152084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.164065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.164086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.176070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.176091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.188074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.188095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.200077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.200096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.212097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.212117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.224098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.224117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.236101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.236120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 [2024-12-07 08:07:16.238195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.248114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.248138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.260108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.260128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.272118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.272142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.284121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.284144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.028 [2024-12-07 08:07:16.296121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.028 [2024-12-07 08:07:16.296141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.028 [2024-12-07 08:07:16.297299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.028 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.308122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.308142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.320140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.320165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.332148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.332173] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.344160] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.344189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.356159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.356188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.368163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.368191] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.380175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.380228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.392172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.392226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.404181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.404235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.416186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 
08:07:16.416237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.428195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.428244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.440227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.440266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.452256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.452286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.464254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.464299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 Running I/O for 5 seconds... 
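Each failing call logged in this stretch is an nvmf_subsystem_add_ns request for NSID 1 on cnode1 issued while that NSID is still attached, so the target answers with JSON-RPC error -32602 (Invalid parameters) and logs "Requested NSID 1 already in use". Roughly the same error can be provoked by hand with the RPC client used elsewhere in this run, assuming the malloc0 bdev and cnode1 subsystem created earlier still exist:

# malloc0 is already attached to cnode1 as NSID 1, so asking for NSID 1
# again is rejected with Code=-32602 (Invalid parameters).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1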
00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.480804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.480852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.498153] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.498224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.513027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.513075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.527794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.527841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.287 [2024-12-07 08:07:16.543368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.287 [2024-12-07 08:07:16.543414] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.287 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.561974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.562022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.575987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.576035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.592715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.592761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.606707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.606754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.622658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.622706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.639366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.639400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.654905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.654952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.667405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.667438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.683199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.683268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.699292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.699324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.715708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.715756] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.732222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.732268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.545 [2024-12-07 08:07:16.749179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.545 [2024-12-07 08:07:16.749236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.545 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.546 [2024-12-07 08:07:16.764492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.546 [2024-12-07 08:07:16.764541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.546 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.546 [2024-12-07 08:07:16.781154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.546 [2024-12-07 08:07:16.781225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.546 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.546 [2024-12-07 08:07:16.798292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.546 [2024-12-07 08:07:16.798351] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.546 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.546 [2024-12-07 08:07:16.813356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.546 [2024-12-07 08:07:16.813391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.546 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.822263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.822323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.837150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.837224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.853823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.853869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.869654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.869701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.887327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.887373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.902637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 
08:07:16.902716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.917716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.917762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.928899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.928946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.945837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.945883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.961888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.961933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.979043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.804 [2024-12-07 08:07:16.979090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.804 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.804 [2024-12-07 08:07:16.994985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.805 [2024-12-07 08:07:16.995031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.805 2024/12/07 08:07:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.805 [2024-12-07 08:07:17.012435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
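For reference, reproducing one of these calls by hand against a running target would normally go through SPDK's scripts/rpc.py helper, along the lines of the invocation below (assumed syntax; the positional order and option names should be checked against the rpc.py in this tree). Re-running it while NSID 1 is still attached reproduces the "Requested NSID 1 already in use" rejection seen throughout this run:
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1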
00:16:05.805 [2024-12-07 08:07:17.012482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.805 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.805 [2024-12-07 08:07:17.029756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.805 [2024-12-07 08:07:17.029803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.805 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.805 [2024-12-07 08:07:17.044645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.805 [2024-12-07 08:07:17.044692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.805 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:05.805 [2024-12-07 08:07:17.061419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:05.805 [2024-12-07 08:07:17.061452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:05.805 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.078578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.078626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.095458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.095505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.111499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.111546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.129103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:06.064 [2024-12-07 08:07:17.129151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.144087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.144134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.156186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.156257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.172225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.172272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.188987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.189034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.205647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.205695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.222711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.222757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.236326] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.236369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.251995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.252042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.268324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.268371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.285862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.285908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.301672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.301733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.318479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.318525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.064 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.064 [2024-12-07 08:07:17.335770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.064 [2024-12-07 08:07:17.335819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.349992] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.350039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.365935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.365981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.382364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.382409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.399719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.399766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.415995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.416042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.433068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.433115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.448226] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.448273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 
08:07:17.459535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.459568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.476066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.476114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.491613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.491664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.508660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.508708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.524044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.524092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.534095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.534143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.547996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.548044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
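Each attempt is rejected at subsystem.c:1793 because NSID 1 is already in use, and the RPC layer surfaces it as a generic invalid-parameters failure. The "Code=-32602 Msg=Invalid parameters" reported on every call corresponds to a JSON-RPC 2.0 error object of roughly this shape (sketch reconstructed from the log, not a captured response):
  {
    "jsonrpc": "2.0",
    "id": 1,
    "error": { "code": -32602, "message": "Invalid parameters" }
  }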
00:16:06.329 [2024-12-07 08:07:17.563302] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.563350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.581253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.581327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.329 [2024-12-07 08:07:17.595804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.329 [2024-12-07 08:07:17.595853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.329 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.611974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.612009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.628073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.628119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.644846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.644893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.661987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.662035] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.679184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.679241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.695039] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.695086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.712312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.712361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.728763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.728812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.744804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.744852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.763506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.763556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.777937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.777983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.789033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.789080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.805676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.805723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.821805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.821851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.839052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.839099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.855740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.855787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.872038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.872085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.624 [2024-12-07 08:07:17.887786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.624 [2024-12-07 08:07:17.887835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.624 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:17.906121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:17.906172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:17.920535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:17.920598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:17.931862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:17.931908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:17.948931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:17.948978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:17.964697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:17.964746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:17.981652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:17.981700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:17.998288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:17.998335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.014122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.014169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.031877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.031925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.047647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.047693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.063852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.063899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.080513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.080560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.098020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.098068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.113137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.113183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.124304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.124336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.141058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.141105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.155132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.155179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:06.920 [2024-12-07 08:07:18.170904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:06.920 [2024-12-07 08:07:18.170951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:06.920 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.187080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.187130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.204820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.204869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.220005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.220051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.232183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.232254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.248046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.248093] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.264541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.264587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.281241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.281311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.195 [2024-12-07 08:07:18.298753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.195 [2024-12-07 08:07:18.298800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.195 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.313758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.313805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.330608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.330655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.346646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.346693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.364887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.364934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.378851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.378898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.395948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.395995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.409467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.409501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.424618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.424665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.442730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.442777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.196 [2024-12-07 08:07:18.456780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.196 [2024-12-07 08:07:18.456827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.196 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.457 [2024-12-07 08:07:18.472827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.457 [2024-12-07 08:07:18.472860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.457 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.457 [2024-12-07 08:07:18.490216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.457 [2024-12-07 08:07:18.490276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.457 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.457 [2024-12-07 08:07:18.505284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.457 [2024-12-07 08:07:18.505333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.457 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.457 [2024-12-07 08:07:18.520978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.457 [2024-12-07 08:07:18.521025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.457 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.457 [2024-12-07 08:07:18.538517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.457 [2024-12-07 08:07:18.538564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.457 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.457 [2024-12-07 08:07:18.553899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.457 [2024-12-07 08:07:18.553946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.457 2024/12/07 
08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.457 [2024-12-07 08:07:18.565223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.457 [2024-12-07 08:07:18.565255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.457 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.582096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.582143] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.596543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.596591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.612902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.612949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.630183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.630259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.645080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.645130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.662625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.662689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
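[Editor's note, not part of the captured console output] The records above show the test harness repeatedly calling the nvmf_subsystem_add_ns RPC against nqn.2016-06.io.spdk:cnode1 with an NSID (1) that is already attached, and the target rejecting each attempt with Code=-32602 Msg=Invalid parameters. The following is a minimal sketch of the JSON-RPC request being exercised, reconstructed from the params map printed in the log; the raw-socket client and the default socket path /var/tmp/spdk.sock are assumptions for illustration, not taken from this run.

```python
# Hedged sketch: issue the same nvmf_subsystem_add_ns request the test loop above sends.
# Assumes an SPDK target listening on the default RPC socket /var/tmp/spdk.sock.
import json
import socket

RPC_SOCK = "/var/tmp/spdk.sock"  # assumed default SPDK RPC socket path

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(RPC_SOCK)
    sock.sendall(json.dumps(request).encode())
    # Read a single response; a robust client would frame on the JSON boundary.
    response = json.loads(sock.recv(65536).decode())

# If NSID 1 is already in use on the subsystem, the target answers with the error
# seen throughout this log: {"code": -32602, "message": "Invalid parameters"}.
print(response.get("error", response.get("result")))
```

The subsequent console output continues the same negative-test loop.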
00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.677052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.677101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.692437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.692482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.704193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.704272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.458 [2024-12-07 08:07:18.720562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.458 [2024-12-07 08:07:18.720610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.458 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.716 [2024-12-07 08:07:18.736625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.716 [2024-12-07 08:07:18.736691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.752331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.752366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.762418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.762468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.777062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.777111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.787130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.787177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.801079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.801127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.817267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.817332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.833973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.834021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.849966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.850013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.867214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.867260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.883594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.883641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.900233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.900262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.916485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.916533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.932044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.932091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.941878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.941907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.963669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.963717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.717 [2024-12-07 08:07:18.977523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.717 [2024-12-07 08:07:18.977573] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.717 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:18.993022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 08:07:18.993071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.976 2024/12/07 08:07:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:19.010016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 08:07:19.010063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.976 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:19.025985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 08:07:19.026032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.976 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:19.042906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 08:07:19.042954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.976 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:19.059743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 08:07:19.059790] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.976 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:19.076459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 08:07:19.076506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.976 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:19.093130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 
08:07:19.093179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.976 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.976 [2024-12-07 08:07:19.108874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.976 [2024-12-07 08:07:19.108922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.120434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.977 [2024-12-07 08:07:19.120467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.136969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.977 [2024-12-07 08:07:19.137016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.153157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.977 [2024-12-07 08:07:19.153229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.170859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.977 [2024-12-07 08:07:19.170908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.184816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.977 [2024-12-07 08:07:19.184862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.201140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:07.977 [2024-12-07 08:07:19.201187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.218277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.977 [2024-12-07 08:07:19.218324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:07.977 [2024-12-07 08:07:19.233474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:07.977 [2024-12-07 08:07:19.233524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:07.977 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.250845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.250895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.265963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.266010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.282053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.282101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.298724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.298771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.314018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:08.236 [2024-12-07 08:07:19.314064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.325458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.325507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.341368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.341418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.358380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.358427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.376278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.376327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.391135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.391182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.405844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.405892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.421978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.422026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.438024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.236 [2024-12-07 08:07:19.438071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.236 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.236 [2024-12-07 08:07:19.454236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.237 [2024-12-07 08:07:19.454314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.237 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.237 [2024-12-07 08:07:19.471536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.237 [2024-12-07 08:07:19.471585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.237 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.237 [2024-12-07 08:07:19.486804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.237 [2024-12-07 08:07:19.486852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.237 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.237 [2024-12-07 08:07:19.502922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.237 [2024-12-07 08:07:19.502970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.237 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.519030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.519078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.537349] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.537400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.551841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.551889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.568134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.568181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.584135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.584183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.602371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.602419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.616450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.616499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.631813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.631862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 
08:07:19.643813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.643862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.660640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.660688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.676845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.676892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.695063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.695111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.709152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.709224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.725036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.725084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.496 [2024-12-07 08:07:19.741167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.741227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:08.496 [2024-12-07 08:07:19.759077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.496 [2024-12-07 08:07:19.759127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.496 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.755 [2024-12-07 08:07:19.774420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.755 [2024-12-07 08:07:19.774455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.755 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.755 [2024-12-07 08:07:19.793221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.755 [2024-12-07 08:07:19.793308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.755 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.755 [2024-12-07 08:07:19.808146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.755 [2024-12-07 08:07:19.808222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.755 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.755 [2024-12-07 08:07:19.825746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.755 [2024-12-07 08:07:19.825784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.755 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.841530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.841567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.860549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.860599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.874908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.874958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.890254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.890305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.907500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.907551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.924426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.924476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.941236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.941338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.957004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.957051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.968580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.968627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.984751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.984799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:19.999587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:19.999635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:08.756 [2024-12-07 08:07:20.015720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.756 [2024-12-07 08:07:20.015770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.756 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.032624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.032675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.049082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.049131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.065314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.065351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.082453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.082499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.097811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.097860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.114736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.114786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.129620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.129669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.145899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.145947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.161760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.161809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.179777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.179825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.194951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.194998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.206045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.206092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.222833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.222881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.238369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.238416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.256021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.256069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.272041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.272089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.016 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.016 [2024-12-07 08:07:20.290068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.016 [2024-12-07 08:07:20.290116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.305069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.305116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.321129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.321176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.337292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.337356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.353708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.353755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.371124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.371172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.386731] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.386778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.397963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.398011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.414982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.415030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.430376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.430423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.448223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.448268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.463521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.463570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.475169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.475229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.491426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.276 [2024-12-07 08:07:20.491473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.276 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.276 [2024-12-07 08:07:20.508428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.277 [2024-12-07 08:07:20.508460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.277 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.277 [2024-12-07 08:07:20.523900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.277 [2024-12-07 08:07:20.523948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.277 2024/12/07 08:07:20 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.277 [2024-12-07 08:07:20.541035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.277 [2024-12-07 08:07:20.541082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.277 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.558 [2024-12-07 08:07:20.558117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.558 [2024-12-07 08:07:20.558163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.558 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.558 [2024-12-07 08:07:20.573740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.558 [2024-12-07 08:07:20.573789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.558 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.558 [2024-12-07 08:07:20.584719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.558 [2024-12-07 08:07:20.584765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.558 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.558 [2024-12-07 08:07:20.600379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.600425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.616288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.616334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.628288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.628335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.643371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.643418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.654874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.654921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.671565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.671612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.688173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.688231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.703758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.703805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.714975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.715020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.731056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.731102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 
08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.747279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.747326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.763188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.763245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.780440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.780490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.559 [2024-12-07 08:07:20.800251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.559 [2024-12-07 08:07:20.800293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.559 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.818711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.818759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.833588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.833668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.849410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.849459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
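The two messages repeating above always come in pairs: subsystem.c rejects the add because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, and nvmf_rpc.c then fails the RPC with code -32602 (Invalid parameters). Restated from the logged params map, the rejected call is the one below; this is a readability sketch of what the test issues through its rpc_cmd wrapper, not additional test code.

    # The call being rejected above, restated from the logged parameters (sketch):
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Equivalent JSON-RPC request body:
    #   {"method": "nvmf_subsystem_add_ns",
    #    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
    #               "namespace": {"bdev_name": "malloc0", "nsid": 1}}}
    # While NSID 1 is already in use, the expected reply is Code=-32602 Msg=Invalid parameters.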
00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.866644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.866690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.881849] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.881897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.892853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.892899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.909325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.909360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.925789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.925836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.941020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.941069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.956301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.956348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.966834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.966882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.818 [2024-12-07 08:07:20.980176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.818 [2024-12-07 08:07:20.980235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.818 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.819 [2024-12-07 08:07:20.995630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.819 [2024-12-07 08:07:20.995678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.819 2024/12/07 08:07:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.819 [2024-12-07 08:07:21.004848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.819 [2024-12-07 08:07:21.004896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.819 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.819 [2024-12-07 08:07:21.021047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.819 [2024-12-07 08:07:21.021099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.819 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.819 [2024-12-07 08:07:21.039252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.819 [2024-12-07 08:07:21.039300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.819 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.819 [2024-12-07 08:07:21.053908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.819 [2024-12-07 08:07:21.053957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:09.819 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.819 [2024-12-07 08:07:21.069891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.819 [2024-12-07 08:07:21.069942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.819 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:09.819 [2024-12-07 08:07:21.086043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.819 [2024-12-07 08:07:21.086091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.819 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.103565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.103611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.118975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.119023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.134521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.134568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.151221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.151282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.168001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.168048] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.183493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.183539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.201929] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.201976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.215355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.215400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.231258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.078 [2024-12-07 08:07:21.231304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.078 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.078 [2024-12-07 08:07:21.247747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.079 [2024-12-07 08:07:21.247794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.079 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.079 [2024-12-07 08:07:21.264134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.079 [2024-12-07 08:07:21.264180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.079 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.079 [2024-12-07 08:07:21.280725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.079 [2024-12-07 
08:07:21.280772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.079 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.079 [2024-12-07 08:07:21.297317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.079 [2024-12-07 08:07:21.297371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.079 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.079 [2024-12-07 08:07:21.314423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.079 [2024-12-07 08:07:21.314469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.079 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.079 [2024-12-07 08:07:21.329952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.079 [2024-12-07 08:07:21.329999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.079 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.079 [2024-12-07 08:07:21.347797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.079 [2024-12-07 08:07:21.347860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.079 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.362396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.362442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.377394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.377442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.392901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
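The same failure keeps repeating because the test retries the add for as long as its background I/O job is still running, and every attempt is expected to be rejected. A minimal loop of roughly this shape would produce the stream of errors seen here; it is an illustrative sketch under that assumption, not the actual zcopy.sh source, and io_pid is a placeholder for the background job's PID.

    # Illustrative sketch only (not the real zcopy.sh contents): keep re-adding the
    # already-attached namespace while the I/O job is alive; a success would be a bug.
    while kill -0 "$io_pid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
            && { echo "nvmf_subsystem_add_ns unexpectedly succeeded" >&2; exit 1; }
    done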
00:16:10.338 [2024-12-07 08:07:21.392948] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.409569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.409618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.426228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.426286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.443035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.443083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.460506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.460555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.475803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.475849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 00:16:10.338 Latency(us) 00:16:10.338 [2024-12-07T08:07:21.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.338 [2024-12-07T08:07:21.614Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:10.338 Nvme1n1 : 5.01 12816.29 100.13 0.00 0.00 9973.76 4140.68 20971.52 00:16:10.338 [2024-12-07T08:07:21.614Z] =================================================================================================================== 00:16:10.338 [2024-12-07T08:07:21.614Z] Total : 12816.29 100.13 0.00 0.00 9973.76 4140.68 20971.52 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 
[2024-12-07 08:07:21.487748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.487775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.499769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.499803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.511778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.511813] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.338 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.338 [2024-12-07 08:07:21.523777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.338 [2024-12-07 08:07:21.523812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.339 [2024-12-07 08:07:21.535779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.339 [2024-12-07 08:07:21.535811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.339 [2024-12-07 08:07:21.547787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.339 [2024-12-07 08:07:21.547823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.339 [2024-12-07 08:07:21.559790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.339 [2024-12-07 08:07:21.559822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:16:10.339 [2024-12-07 08:07:21.571793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.339 [2024-12-07 08:07:21.571828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.339 [2024-12-07 08:07:21.583791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.339 [2024-12-07 08:07:21.583823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.339 [2024-12-07 08:07:21.595791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.339 [2024-12-07 08:07:21.595823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.339 [2024-12-07 08:07:21.607789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.339 [2024-12-07 08:07:21.607819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.339 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.597 [2024-12-07 08:07:21.619780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.597 [2024-12-07 08:07:21.619806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.597 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.597 [2024-12-07 08:07:21.631809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.597 [2024-12-07 08:07:21.631842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.598 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.598 [2024-12-07 08:07:21.643788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.598 [2024-12-07 08:07:21.643815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.598 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:16:10.598 [2024-12-07 08:07:21.655814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.598 [2024-12-07 08:07:21.655845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.598 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.598 [2024-12-07 08:07:21.667792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.598 [2024-12-07 08:07:21.667815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.598 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.598 [2024-12-07 08:07:21.679821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.598 [2024-12-07 08:07:21.679848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.598 2024/12/07 08:07:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:10.598 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (86385) - No such process 00:16:10.598 08:07:21 -- target/zcopy.sh@49 -- # wait 86385 00:16:10.598 08:07:21 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.598 08:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.598 08:07:21 -- common/autotest_common.sh@10 -- # set +x 00:16:10.598 08:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.598 08:07:21 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:10.598 08:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.598 08:07:21 -- common/autotest_common.sh@10 -- # set +x 00:16:10.598 delay0 00:16:10.598 08:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.598 08:07:21 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:10.598 08:07:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.598 08:07:21 -- common/autotest_common.sh@10 -- # set +x 00:16:10.598 08:07:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.598 08:07:21 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:10.598 [2024-12-07 08:07:21.867934] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:17.161 Initializing NVMe Controllers 00:16:17.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:17.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:17.161 Initialization complete. Launching workers. 
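With the duplicate-NSID checks finished (zcopy.sh@49-@54 above), the test replaces the plain malloc0 namespace with a delay bdev and points the abort example at the target. The commands are already visible in the log lines above; the block below only restates them in one place for readability (the four latency arguments to bdev_delay_create are the injected average and p99 read/write latencies, in microseconds). The NS:/CTRLR: lines that follow report how many I/Os completed, how many aborts were submitted, and how many of those aborts succeeded or failed.

    # Consolidated restatement of the steps logged at zcopy.sh@52-@56 (sketch):
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'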
00:16:17.161 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 77 00:16:17.161 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 364, failed to submit 33 00:16:17.161 success 173, unsuccess 191, failed 0 00:16:17.161 08:07:27 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:17.161 08:07:27 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:17.161 08:07:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:17.161 08:07:27 -- nvmf/common.sh@116 -- # sync 00:16:17.161 08:07:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:17.161 08:07:27 -- nvmf/common.sh@119 -- # set +e 00:16:17.161 08:07:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:17.161 08:07:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:17.161 rmmod nvme_tcp 00:16:17.161 rmmod nvme_fabrics 00:16:17.161 rmmod nvme_keyring 00:16:17.161 08:07:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:17.161 08:07:28 -- nvmf/common.sh@123 -- # set -e 00:16:17.161 08:07:28 -- nvmf/common.sh@124 -- # return 0 00:16:17.161 08:07:28 -- nvmf/common.sh@477 -- # '[' -n 86215 ']' 00:16:17.161 08:07:28 -- nvmf/common.sh@478 -- # killprocess 86215 00:16:17.161 08:07:28 -- common/autotest_common.sh@936 -- # '[' -z 86215 ']' 00:16:17.161 08:07:28 -- common/autotest_common.sh@940 -- # kill -0 86215 00:16:17.161 08:07:28 -- common/autotest_common.sh@941 -- # uname 00:16:17.161 08:07:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:17.161 08:07:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86215 00:16:17.161 08:07:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:17.161 08:07:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:17.161 killing process with pid 86215 00:16:17.161 08:07:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86215' 00:16:17.161 08:07:28 -- common/autotest_common.sh@955 -- # kill 86215 00:16:17.161 08:07:28 -- common/autotest_common.sh@960 -- # wait 86215 00:16:17.161 08:07:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:17.161 08:07:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:17.161 08:07:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:17.161 08:07:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.161 08:07:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:17.161 08:07:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.161 08:07:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.161 08:07:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.161 08:07:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:17.161 00:16:17.161 real 0m24.676s 00:16:17.161 user 0m39.756s 00:16:17.161 sys 0m6.621s 00:16:17.161 08:07:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:17.161 08:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.161 ************************************ 00:16:17.161 END TEST nvmf_zcopy 00:16:17.161 ************************************ 00:16:17.161 08:07:28 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:17.161 08:07:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:17.161 08:07:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.161 08:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.161 ************************************ 00:16:17.161 START TEST nvmf_nmic 
00:16:17.161 ************************************ 00:16:17.161 08:07:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:17.161 * Looking for test storage... 00:16:17.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:17.161 08:07:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:17.421 08:07:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:17.421 08:07:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:17.421 08:07:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:17.421 08:07:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:17.421 08:07:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:17.421 08:07:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:17.421 08:07:28 -- scripts/common.sh@335 -- # IFS=.-: 00:16:17.421 08:07:28 -- scripts/common.sh@335 -- # read -ra ver1 00:16:17.421 08:07:28 -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.421 08:07:28 -- scripts/common.sh@336 -- # read -ra ver2 00:16:17.421 08:07:28 -- scripts/common.sh@337 -- # local 'op=<' 00:16:17.421 08:07:28 -- scripts/common.sh@339 -- # ver1_l=2 00:16:17.422 08:07:28 -- scripts/common.sh@340 -- # ver2_l=1 00:16:17.422 08:07:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:17.422 08:07:28 -- scripts/common.sh@343 -- # case "$op" in 00:16:17.422 08:07:28 -- scripts/common.sh@344 -- # : 1 00:16:17.422 08:07:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:17.422 08:07:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.422 08:07:28 -- scripts/common.sh@364 -- # decimal 1 00:16:17.422 08:07:28 -- scripts/common.sh@352 -- # local d=1 00:16:17.422 08:07:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.422 08:07:28 -- scripts/common.sh@354 -- # echo 1 00:16:17.422 08:07:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:17.422 08:07:28 -- scripts/common.sh@365 -- # decimal 2 00:16:17.422 08:07:28 -- scripts/common.sh@352 -- # local d=2 00:16:17.422 08:07:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.422 08:07:28 -- scripts/common.sh@354 -- # echo 2 00:16:17.422 08:07:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:17.422 08:07:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:17.422 08:07:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:17.422 08:07:28 -- scripts/common.sh@367 -- # return 0 00:16:17.422 08:07:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.422 08:07:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.422 --rc genhtml_branch_coverage=1 00:16:17.422 --rc genhtml_function_coverage=1 00:16:17.422 --rc genhtml_legend=1 00:16:17.422 --rc geninfo_all_blocks=1 00:16:17.422 --rc geninfo_unexecuted_blocks=1 00:16:17.422 00:16:17.422 ' 00:16:17.422 08:07:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.422 --rc genhtml_branch_coverage=1 00:16:17.422 --rc genhtml_function_coverage=1 00:16:17.422 --rc genhtml_legend=1 00:16:17.422 --rc geninfo_all_blocks=1 00:16:17.422 --rc geninfo_unexecuted_blocks=1 00:16:17.422 00:16:17.422 ' 00:16:17.422 08:07:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.422 --rc 
genhtml_branch_coverage=1 00:16:17.422 --rc genhtml_function_coverage=1 00:16:17.422 --rc genhtml_legend=1 00:16:17.422 --rc geninfo_all_blocks=1 00:16:17.422 --rc geninfo_unexecuted_blocks=1 00:16:17.422 00:16:17.422 ' 00:16:17.422 08:07:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:17.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.422 --rc genhtml_branch_coverage=1 00:16:17.422 --rc genhtml_function_coverage=1 00:16:17.422 --rc genhtml_legend=1 00:16:17.422 --rc geninfo_all_blocks=1 00:16:17.422 --rc geninfo_unexecuted_blocks=1 00:16:17.422 00:16:17.422 ' 00:16:17.422 08:07:28 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.422 08:07:28 -- nvmf/common.sh@7 -- # uname -s 00:16:17.422 08:07:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.422 08:07:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.422 08:07:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.422 08:07:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.422 08:07:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.422 08:07:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.422 08:07:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.422 08:07:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.422 08:07:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.422 08:07:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.422 08:07:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:17.422 08:07:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:17.422 08:07:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.422 08:07:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.422 08:07:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.422 08:07:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.422 08:07:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.422 08:07:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.422 08:07:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.422 08:07:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.422 08:07:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.422 08:07:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.422 08:07:28 -- paths/export.sh@5 -- # export PATH 00:16:17.422 08:07:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.422 08:07:28 -- nvmf/common.sh@46 -- # : 0 00:16:17.422 08:07:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:17.422 08:07:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:17.422 08:07:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:17.422 08:07:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.422 08:07:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.422 08:07:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:17.422 08:07:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:17.422 08:07:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:17.422 08:07:28 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.422 08:07:28 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.422 08:07:28 -- target/nmic.sh@14 -- # nvmftestinit 00:16:17.422 08:07:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:17.422 08:07:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.422 08:07:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:17.422 08:07:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:17.422 08:07:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:17.422 08:07:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.422 08:07:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.422 08:07:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.422 08:07:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:17.422 08:07:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:17.422 08:07:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:17.422 08:07:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:17.422 08:07:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:17.422 08:07:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:17.422 08:07:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.422 08:07:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.422 08:07:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:17.422 08:07:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:17.422 08:07:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.422 08:07:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.422 08:07:28 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.422 08:07:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.422 08:07:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.422 08:07:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.422 08:07:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.422 08:07:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.422 08:07:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:17.422 08:07:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:17.422 Cannot find device "nvmf_tgt_br" 00:16:17.422 08:07:28 -- nvmf/common.sh@154 -- # true 00:16:17.422 08:07:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.422 Cannot find device "nvmf_tgt_br2" 00:16:17.422 08:07:28 -- nvmf/common.sh@155 -- # true 00:16:17.422 08:07:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:17.422 08:07:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:17.422 Cannot find device "nvmf_tgt_br" 00:16:17.422 08:07:28 -- nvmf/common.sh@157 -- # true 00:16:17.422 08:07:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:17.422 Cannot find device "nvmf_tgt_br2" 00:16:17.422 08:07:28 -- nvmf/common.sh@158 -- # true 00:16:17.422 08:07:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:17.422 08:07:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:17.681 08:07:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.681 08:07:28 -- nvmf/common.sh@161 -- # true 00:16:17.681 08:07:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.681 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.681 08:07:28 -- nvmf/common.sh@162 -- # true 00:16:17.681 08:07:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.681 08:07:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.681 08:07:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.681 08:07:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.681 08:07:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.681 08:07:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.681 08:07:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.681 08:07:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:17.681 08:07:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:17.681 08:07:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:17.681 08:07:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:17.681 08:07:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:17.681 08:07:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:17.681 08:07:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.681 08:07:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.681 08:07:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:17.681 08:07:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:17.681 08:07:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:17.681 08:07:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.681 08:07:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.681 08:07:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.682 08:07:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.682 08:07:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.682 08:07:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:17.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:16:17.682 00:16:17.682 --- 10.0.0.2 ping statistics --- 00:16:17.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.682 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:17.682 08:07:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:17.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:17.682 00:16:17.682 --- 10.0.0.3 ping statistics --- 00:16:17.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.682 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:17.682 08:07:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:17.682 00:16:17.682 --- 10.0.0.1 ping statistics --- 00:16:17.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.682 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:17.682 08:07:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.682 08:07:28 -- nvmf/common.sh@421 -- # return 0 00:16:17.682 08:07:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.682 08:07:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.682 08:07:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.682 08:07:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.682 08:07:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.682 08:07:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.682 08:07:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.682 08:07:28 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:17.682 08:07:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.682 08:07:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.682 08:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.682 08:07:28 -- nvmf/common.sh@469 -- # nvmfpid=86708 00:16:17.682 08:07:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:17.682 08:07:28 -- nvmf/common.sh@470 -- # waitforlisten 86708 00:16:17.682 08:07:28 -- common/autotest_common.sh@829 -- # '[' -z 86708 ']' 00:16:17.682 08:07:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.682 08:07:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:17.682 08:07:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.682 08:07:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.682 08:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:17.682 [2024-12-07 08:07:28.946195] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:17.682 [2024-12-07 08:07:28.946326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.940 [2024-12-07 08:07:29.083268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.940 [2024-12-07 08:07:29.157975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.940 [2024-12-07 08:07:29.158149] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.940 [2024-12-07 08:07:29.158162] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.940 [2024-12-07 08:07:29.158172] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.940 [2024-12-07 08:07:29.158337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.940 [2024-12-07 08:07:29.158497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.940 [2024-12-07 08:07:29.159117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.940 [2024-12-07 08:07:29.159152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.876 08:07:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.876 08:07:29 -- common/autotest_common.sh@862 -- # return 0 00:16:18.876 08:07:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.876 08:07:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.876 08:07:29 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 08:07:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.876 08:07:30 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 [2024-12-07 08:07:30.037527] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.876 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.876 08:07:30 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 Malloc0 00:16:18.876 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.876 08:07:30 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.876 08:07:30 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 
-- common/autotest_common.sh@10 -- # set +x 00:16:18.876 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.876 08:07:30 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 [2024-12-07 08:07:30.119946] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.876 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.876 08:07:30 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:18.876 test case1: single bdev can't be used in multiple subsystems 00:16:18.876 08:07:30 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.876 08:07:30 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.876 08:07:30 -- target/nmic.sh@28 -- # nmic_status=0 00:16:18.876 08:07:30 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:18.876 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.876 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:18.876 [2024-12-07 08:07:30.143757] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:18.876 [2024-12-07 08:07:30.143792] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:18.876 [2024-12-07 08:07:30.143804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:18.876 2024/12/07 08:07:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:18.876 request: 00:16:18.876 { 00:16:18.877 "method": "nvmf_subsystem_add_ns", 00:16:18.877 "params": { 00:16:18.877 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:18.877 "namespace": { 00:16:18.877 "bdev_name": "Malloc0" 00:16:18.877 } 00:16:18.877 } 00:16:18.877 } 00:16:18.877 Got JSON-RPC error response 00:16:18.877 GoRPCClient: error on JSON-RPC call 00:16:19.135 08:07:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:19.135 08:07:30 -- target/nmic.sh@29 -- # nmic_status=1 00:16:19.135 08:07:30 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:19.135 08:07:30 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:19.135 Adding namespace failed - expected result. 
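Test case1 above exercises the expected failure path: once Malloc0 has been added as a namespace of cnode1, the bdev is claimed exclusive_write by the NVMe-oF target module, so attaching it to a second subsystem is rejected and the JSON-RPC call returns -32602. A minimal sketch of the same sequence against a running nvmf_tgt, using only the rpc.py subcommands that appear in this trace (rpc.py here stands for the repo's scripts/rpc.py helper; the subsystem names and serial numbers are simply the ones this test uses):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # The second add_ns on the same bdev fails: Malloc0 is already claimed
  # (exclusive_write) by cnode1, so the RPC returns -32602 Invalid parameters.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0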
00:16:19.135 test case2: host connect to nvmf target in multiple paths 00:16:19.135 08:07:30 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:19.135 08:07:30 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:19.135 08:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.135 08:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:19.135 [2024-12-07 08:07:30.155884] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:19.135 08:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.135 08:07:30 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:19.135 08:07:30 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:19.394 08:07:30 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:19.394 08:07:30 -- common/autotest_common.sh@1187 -- # local i=0 00:16:19.394 08:07:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.394 08:07:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:19.394 08:07:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:21.297 08:07:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:21.297 08:07:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:21.297 08:07:32 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:21.297 08:07:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:21.297 08:07:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.297 08:07:32 -- common/autotest_common.sh@1197 -- # return 0 00:16:21.297 08:07:32 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:21.297 [global] 00:16:21.297 thread=1 00:16:21.297 invalidate=1 00:16:21.297 rw=write 00:16:21.297 time_based=1 00:16:21.297 runtime=1 00:16:21.297 ioengine=libaio 00:16:21.297 direct=1 00:16:21.297 bs=4096 00:16:21.297 iodepth=1 00:16:21.297 norandommap=0 00:16:21.297 numjobs=1 00:16:21.297 00:16:21.297 verify_dump=1 00:16:21.297 verify_backlog=512 00:16:21.297 verify_state_save=0 00:16:21.297 do_verify=1 00:16:21.297 verify=crc32c-intel 00:16:21.297 [job0] 00:16:21.297 filename=/dev/nvme0n1 00:16:21.297 Could not set queue depth (nvme0n1) 00:16:21.556 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.556 fio-3.35 00:16:21.556 Starting 1 thread 00:16:22.930 00:16:22.930 job0: (groupid=0, jobs=1): err= 0: pid=86823: Sat Dec 7 08:07:33 2024 00:16:22.930 read: IOPS=3511, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1001msec) 00:16:22.930 slat (nsec): min=11937, max=59275, avg=13940.52, stdev=3830.07 00:16:22.930 clat (usec): min=117, max=691, avg=139.83, stdev=21.12 00:16:22.930 lat (usec): min=129, max=704, avg=153.77, stdev=21.59 00:16:22.930 clat percentiles (usec): 00:16:22.930 | 1.00th=[ 121], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 128], 00:16:22.930 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:16:22.930 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 161], 
95.00th=[ 169], 00:16:22.930 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 404], 99.95th=[ 529], 00:16:22.930 | 99.99th=[ 693] 00:16:22.930 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:22.930 slat (usec): min=14, max=111, avg=21.90, stdev= 6.32 00:16:22.930 clat (usec): min=86, max=364, avg=103.17, stdev=16.73 00:16:22.930 lat (usec): min=105, max=385, avg=125.06, stdev=18.38 00:16:22.930 clat percentiles (usec): 00:16:22.930 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 92], 20.00th=[ 94], 00:16:22.930 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 98], 60.00th=[ 100], 00:16:22.930 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 124], 95.00th=[ 131], 00:16:22.930 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 306], 99.95th=[ 363], 00:16:22.930 | 99.99th=[ 367] 00:16:22.930 bw ( KiB/s): min=16352, max=16352, per=100.00%, avg=16352.00, stdev= 0.00, samples=1 00:16:22.930 iops : min= 4088, max= 4088, avg=4088.00, stdev= 0.00, samples=1 00:16:22.930 lat (usec) : 100=30.06%, 250=69.70%, 500=0.21%, 750=0.03% 00:16:22.930 cpu : usr=2.30%, sys=9.70%, ctx=7099, majf=0, minf=5 00:16:22.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.930 issued rwts: total=3515,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.930 00:16:22.930 Run status group 0 (all jobs): 00:16:22.930 READ: bw=13.7MiB/s (14.4MB/s), 13.7MiB/s-13.7MiB/s (14.4MB/s-14.4MB/s), io=13.7MiB (14.4MB), run=1001-1001msec 00:16:22.930 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:22.931 00:16:22.931 Disk stats (read/write): 00:16:22.931 nvme0n1: ios=3122/3320, merge=0/0, ticks=451/375, in_queue=826, util=91.08% 00:16:22.931 08:07:33 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:22.931 08:07:33 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.931 08:07:33 -- common/autotest_common.sh@1208 -- # local i=0 00:16:22.931 08:07:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.931 08:07:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:22.931 08:07:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:22.931 08:07:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.931 08:07:33 -- common/autotest_common.sh@1220 -- # return 0 00:16:22.931 08:07:33 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:22.931 08:07:33 -- target/nmic.sh@53 -- # nvmftestfini 00:16:22.931 08:07:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:22.931 08:07:33 -- nvmf/common.sh@116 -- # sync 00:16:22.931 08:07:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:22.931 08:07:33 -- nvmf/common.sh@119 -- # set +e 00:16:22.931 08:07:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:22.931 08:07:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:22.931 rmmod nvme_tcp 00:16:22.931 rmmod nvme_fabrics 00:16:22.931 rmmod nvme_keyring 00:16:22.931 08:07:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:22.931 08:07:34 -- nvmf/common.sh@123 -- # set -e 00:16:22.931 08:07:34 -- nvmf/common.sh@124 -- # return 0 00:16:22.931 08:07:34 -- nvmf/common.sh@477 -- # 
'[' -n 86708 ']' 00:16:22.931 08:07:34 -- nvmf/common.sh@478 -- # killprocess 86708 00:16:22.931 08:07:34 -- common/autotest_common.sh@936 -- # '[' -z 86708 ']' 00:16:22.931 08:07:34 -- common/autotest_common.sh@940 -- # kill -0 86708 00:16:22.931 08:07:34 -- common/autotest_common.sh@941 -- # uname 00:16:22.931 08:07:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.931 08:07:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86708 00:16:22.931 08:07:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.931 08:07:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.931 killing process with pid 86708 00:16:22.931 08:07:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86708' 00:16:22.931 08:07:34 -- common/autotest_common.sh@955 -- # kill 86708 00:16:22.931 08:07:34 -- common/autotest_common.sh@960 -- # wait 86708 00:16:23.189 08:07:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:23.189 08:07:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:23.189 08:07:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:23.189 08:07:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.189 08:07:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:23.189 08:07:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.189 08:07:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.189 08:07:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.189 08:07:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:23.189 00:16:23.189 real 0m5.974s 00:16:23.189 user 0m20.193s 00:16:23.189 sys 0m1.413s 00:16:23.189 08:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:23.189 ************************************ 00:16:23.189 END TEST nvmf_nmic 00:16:23.189 08:07:34 -- common/autotest_common.sh@10 -- # set +x 00:16:23.189 ************************************ 00:16:23.189 08:07:34 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:23.189 08:07:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:23.189 08:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.189 08:07:34 -- common/autotest_common.sh@10 -- # set +x 00:16:23.189 ************************************ 00:16:23.189 START TEST nvmf_fio_target 00:16:23.189 ************************************ 00:16:23.189 08:07:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:23.189 * Looking for test storage... 
00:16:23.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.189 08:07:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:23.189 08:07:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:23.189 08:07:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:23.448 08:07:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:23.448 08:07:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:23.448 08:07:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:23.448 08:07:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:23.448 08:07:34 -- scripts/common.sh@335 -- # IFS=.-: 00:16:23.448 08:07:34 -- scripts/common.sh@335 -- # read -ra ver1 00:16:23.448 08:07:34 -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.448 08:07:34 -- scripts/common.sh@336 -- # read -ra ver2 00:16:23.448 08:07:34 -- scripts/common.sh@337 -- # local 'op=<' 00:16:23.448 08:07:34 -- scripts/common.sh@339 -- # ver1_l=2 00:16:23.448 08:07:34 -- scripts/common.sh@340 -- # ver2_l=1 00:16:23.448 08:07:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:23.448 08:07:34 -- scripts/common.sh@343 -- # case "$op" in 00:16:23.448 08:07:34 -- scripts/common.sh@344 -- # : 1 00:16:23.448 08:07:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:23.448 08:07:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.448 08:07:34 -- scripts/common.sh@364 -- # decimal 1 00:16:23.448 08:07:34 -- scripts/common.sh@352 -- # local d=1 00:16:23.448 08:07:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.448 08:07:34 -- scripts/common.sh@354 -- # echo 1 00:16:23.448 08:07:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:23.448 08:07:34 -- scripts/common.sh@365 -- # decimal 2 00:16:23.448 08:07:34 -- scripts/common.sh@352 -- # local d=2 00:16:23.448 08:07:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.448 08:07:34 -- scripts/common.sh@354 -- # echo 2 00:16:23.448 08:07:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:23.448 08:07:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:23.448 08:07:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:23.448 08:07:34 -- scripts/common.sh@367 -- # return 0 00:16:23.448 08:07:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.448 08:07:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:23.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.448 --rc genhtml_branch_coverage=1 00:16:23.448 --rc genhtml_function_coverage=1 00:16:23.449 --rc genhtml_legend=1 00:16:23.449 --rc geninfo_all_blocks=1 00:16:23.449 --rc geninfo_unexecuted_blocks=1 00:16:23.449 00:16:23.449 ' 00:16:23.449 08:07:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.449 --rc genhtml_branch_coverage=1 00:16:23.449 --rc genhtml_function_coverage=1 00:16:23.449 --rc genhtml_legend=1 00:16:23.449 --rc geninfo_all_blocks=1 00:16:23.449 --rc geninfo_unexecuted_blocks=1 00:16:23.449 00:16:23.449 ' 00:16:23.449 08:07:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.449 --rc genhtml_branch_coverage=1 00:16:23.449 --rc genhtml_function_coverage=1 00:16:23.449 --rc genhtml_legend=1 00:16:23.449 --rc geninfo_all_blocks=1 00:16:23.449 --rc geninfo_unexecuted_blocks=1 00:16:23.449 00:16:23.449 ' 00:16:23.449 
08:07:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:23.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.449 --rc genhtml_branch_coverage=1 00:16:23.449 --rc genhtml_function_coverage=1 00:16:23.449 --rc genhtml_legend=1 00:16:23.449 --rc geninfo_all_blocks=1 00:16:23.449 --rc geninfo_unexecuted_blocks=1 00:16:23.449 00:16:23.449 ' 00:16:23.449 08:07:34 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.449 08:07:34 -- nvmf/common.sh@7 -- # uname -s 00:16:23.449 08:07:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.449 08:07:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.449 08:07:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.449 08:07:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.449 08:07:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.449 08:07:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.449 08:07:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.449 08:07:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.449 08:07:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.449 08:07:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.449 08:07:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:23.449 08:07:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:23.449 08:07:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.449 08:07:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.449 08:07:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.449 08:07:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.449 08:07:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.449 08:07:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.449 08:07:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.449 08:07:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.449 08:07:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.449 08:07:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.449 08:07:34 -- paths/export.sh@5 -- # export PATH 00:16:23.449 08:07:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.449 08:07:34 -- nvmf/common.sh@46 -- # : 0 00:16:23.449 08:07:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:23.449 08:07:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:23.449 08:07:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:23.449 08:07:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.449 08:07:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.449 08:07:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:23.449 08:07:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:23.449 08:07:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:23.449 08:07:34 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.449 08:07:34 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.449 08:07:34 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:23.449 08:07:34 -- target/fio.sh@16 -- # nvmftestinit 00:16:23.449 08:07:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:23.449 08:07:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.449 08:07:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:23.449 08:07:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:23.449 08:07:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:23.449 08:07:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.449 08:07:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.449 08:07:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.449 08:07:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:23.449 08:07:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:23.449 08:07:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:23.449 08:07:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:23.449 08:07:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:23.449 08:07:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:23.449 08:07:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.449 08:07:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.449 08:07:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:23.449 08:07:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:23.449 08:07:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.449 08:07:34 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.449 08:07:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.449 08:07:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.449 08:07:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.449 08:07:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.449 08:07:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.449 08:07:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.449 08:07:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:23.449 08:07:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:23.449 Cannot find device "nvmf_tgt_br" 00:16:23.449 08:07:34 -- nvmf/common.sh@154 -- # true 00:16:23.449 08:07:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.449 Cannot find device "nvmf_tgt_br2" 00:16:23.449 08:07:34 -- nvmf/common.sh@155 -- # true 00:16:23.449 08:07:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:23.449 08:07:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:23.449 Cannot find device "nvmf_tgt_br" 00:16:23.449 08:07:34 -- nvmf/common.sh@157 -- # true 00:16:23.449 08:07:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:23.449 Cannot find device "nvmf_tgt_br2" 00:16:23.449 08:07:34 -- nvmf/common.sh@158 -- # true 00:16:23.449 08:07:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:23.449 08:07:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:23.450 08:07:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.450 08:07:34 -- nvmf/common.sh@161 -- # true 00:16:23.450 08:07:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.450 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.708 08:07:34 -- nvmf/common.sh@162 -- # true 00:16:23.709 08:07:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.709 08:07:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.709 08:07:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.709 08:07:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.709 08:07:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.709 08:07:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.709 08:07:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.709 08:07:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.709 08:07:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.709 08:07:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:23.709 08:07:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:23.709 08:07:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:23.709 08:07:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:23.709 08:07:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.709 08:07:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
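The nvmf_veth_init steps traced above (and continuing below with the bridge, the iptables rule for port 4420, and the ping checks) build the virtual topology used by the fio target test: the initiator stays in the root namespace on 10.0.0.1, while the target ends of two veth pairs are moved into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3. A condensed sketch of that setup, restricted to commands that appear in this trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, gets 10.0.0.1
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target path, gets 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target path, gets 10.0.0.3
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # All interfaces are then brought up; the *_br peers are enslaved to the
  # nvmf_br bridge and TCP port 4420 is accepted by iptables before the pings.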
00:16:23.709 08:07:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.709 08:07:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:23.709 08:07:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:23.709 08:07:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.709 08:07:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.709 08:07:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.709 08:07:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.709 08:07:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.709 08:07:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:23.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:23.709 00:16:23.709 --- 10.0.0.2 ping statistics --- 00:16:23.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.709 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:23.709 08:07:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:23.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:16:23.709 00:16:23.709 --- 10.0.0.3 ping statistics --- 00:16:23.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.709 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:23.709 08:07:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:23.709 00:16:23.709 --- 10.0.0.1 ping statistics --- 00:16:23.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.709 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:23.709 08:07:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.709 08:07:34 -- nvmf/common.sh@421 -- # return 0 00:16:23.709 08:07:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:23.709 08:07:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.709 08:07:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:23.709 08:07:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:23.709 08:07:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.709 08:07:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:23.709 08:07:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:23.709 08:07:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:23.709 08:07:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:23.709 08:07:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:23.709 08:07:34 -- common/autotest_common.sh@10 -- # set +x 00:16:23.709 08:07:34 -- nvmf/common.sh@469 -- # nvmfpid=87010 00:16:23.709 08:07:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:23.709 08:07:34 -- nvmf/common.sh@470 -- # waitforlisten 87010 00:16:23.709 08:07:34 -- common/autotest_common.sh@829 -- # '[' -z 87010 ']' 00:16:23.709 08:07:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.709 08:07:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:23.709 08:07:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:23.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.709 08:07:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:23.709 08:07:34 -- common/autotest_common.sh@10 -- # set +x 00:16:23.709 [2024-12-07 08:07:34.956747] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:23.709 [2024-12-07 08:07:34.956872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.035 [2024-12-07 08:07:35.098961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:24.035 [2024-12-07 08:07:35.182110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:24.035 [2024-12-07 08:07:35.182317] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.035 [2024-12-07 08:07:35.182337] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.035 [2024-12-07 08:07:35.182348] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:24.035 [2024-12-07 08:07:35.182454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.035 [2024-12-07 08:07:35.182523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.035 [2024-12-07 08:07:35.183179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:24.035 [2024-12-07 08:07:35.183245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.965 08:07:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:24.965 08:07:35 -- common/autotest_common.sh@862 -- # return 0 00:16:24.965 08:07:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.965 08:07:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:24.965 08:07:35 -- common/autotest_common.sh@10 -- # set +x 00:16:24.965 08:07:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.965 08:07:36 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:25.222 [2024-12-07 08:07:36.285183] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.222 08:07:36 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.478 08:07:36 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:25.478 08:07:36 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.736 08:07:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:25.736 08:07:36 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:25.996 08:07:37 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:25.996 08:07:37 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.252 08:07:37 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:26.252 08:07:37 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:26.508 08:07:37 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:26.765 08:07:38 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:16:26.765 08:07:38 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:27.329 08:07:38 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:27.329 08:07:38 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:27.586 08:07:38 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:27.586 08:07:38 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:27.843 08:07:38 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:28.100 08:07:39 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:28.100 08:07:39 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:28.357 08:07:39 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:28.357 08:07:39 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:28.614 08:07:39 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.614 [2024-12-07 08:07:39.887190] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.871 08:07:39 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:28.871 08:07:40 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:29.129 08:07:40 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:29.387 08:07:40 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:29.387 08:07:40 -- common/autotest_common.sh@1187 -- # local i=0 00:16:29.387 08:07:40 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:29.387 08:07:40 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:29.387 08:07:40 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:29.387 08:07:40 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:31.294 08:07:42 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:31.294 08:07:42 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:31.294 08:07:42 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:31.557 08:07:42 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:31.557 08:07:42 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:31.557 08:07:42 -- common/autotest_common.sh@1197 -- # return 0 00:16:31.557 08:07:42 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:31.557 [global] 00:16:31.557 thread=1 00:16:31.557 invalidate=1 00:16:31.557 rw=write 00:16:31.557 time_based=1 00:16:31.557 runtime=1 00:16:31.557 ioengine=libaio 00:16:31.557 direct=1 00:16:31.557 bs=4096 00:16:31.557 iodepth=1 00:16:31.557 norandommap=0 00:16:31.557 numjobs=1 00:16:31.557 00:16:31.557 verify_dump=1 00:16:31.557 verify_backlog=512 00:16:31.557 
verify_state_save=0 00:16:31.557 do_verify=1 00:16:31.557 verify=crc32c-intel 00:16:31.557 [job0] 00:16:31.557 filename=/dev/nvme0n1 00:16:31.557 [job1] 00:16:31.557 filename=/dev/nvme0n2 00:16:31.557 [job2] 00:16:31.557 filename=/dev/nvme0n3 00:16:31.557 [job3] 00:16:31.557 filename=/dev/nvme0n4 00:16:31.557 Could not set queue depth (nvme0n1) 00:16:31.557 Could not set queue depth (nvme0n2) 00:16:31.557 Could not set queue depth (nvme0n3) 00:16:31.557 Could not set queue depth (nvme0n4) 00:16:31.557 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.557 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.557 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.557 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:31.557 fio-3.35 00:16:31.557 Starting 4 threads 00:16:32.930 00:16:32.930 job0: (groupid=0, jobs=1): err= 0: pid=87308: Sat Dec 7 08:07:43 2024 00:16:32.930 read: IOPS=2127, BW=8511KiB/s (8716kB/s)(8520KiB/1001msec) 00:16:32.930 slat (nsec): min=11785, max=28410, avg=13906.01, stdev=1972.74 00:16:32.930 clat (usec): min=126, max=392, avg=218.87, stdev=47.38 00:16:32.930 lat (usec): min=139, max=410, avg=232.78, stdev=47.87 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 151], 00:16:32.930 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:16:32.930 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 262], 00:16:32.930 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 379], 99.95th=[ 388], 00:16:32.930 | 99.99th=[ 392] 00:16:32.930 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:32.930 slat (usec): min=17, max=132, avg=21.12, stdev= 4.72 00:16:32.930 clat (usec): min=90, max=1657, avg=173.18, stdev=55.04 00:16:32.930 lat (usec): min=109, max=1680, avg=194.30, stdev=55.98 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 96], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 112], 00:16:32.930 | 30.00th=[ 172], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:16:32.930 | 70.00th=[ 200], 80.00th=[ 202], 90.00th=[ 208], 95.00th=[ 212], 00:16:32.930 | 99.00th=[ 221], 99.50th=[ 225], 99.90th=[ 816], 99.95th=[ 930], 00:16:32.930 | 99.99th=[ 1663] 00:16:32.930 bw ( KiB/s): min= 8208, max= 8208, per=23.60%, avg=8208.00, stdev= 0.00, samples=1 00:16:32.930 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:16:32.930 lat (usec) : 100=3.03%, 250=87.16%, 500=9.74%, 1000=0.04% 00:16:32.930 lat (msec) : 2=0.02% 00:16:32.930 cpu : usr=1.70%, sys=6.10%, ctx=4690, majf=0, minf=7 00:16:32.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 issued rwts: total=2130,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.930 job1: (groupid=0, jobs=1): err= 0: pid=87309: Sat Dec 7 08:07:43 2024 00:16:32.930 read: IOPS=1781, BW=7125KiB/s (7296kB/s)(7132KiB/1001msec) 00:16:32.930 slat (nsec): min=11792, max=34402, avg=14727.14, stdev=1940.45 00:16:32.930 clat (usec): min=229, max=739, avg=278.87, stdev=83.39 00:16:32.930 lat (usec): min=243, max=752, avg=293.60, 
stdev=83.46 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 237], 20.00th=[ 239], 00:16:32.930 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:16:32.930 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 469], 95.00th=[ 490], 00:16:32.930 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 725], 99.95th=[ 742], 00:16:32.930 | 99.99th=[ 742] 00:16:32.930 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:32.930 slat (usec): min=19, max=125, avg=23.41, stdev= 5.24 00:16:32.930 clat (usec): min=104, max=603, avg=206.08, stdev=29.15 00:16:32.930 lat (usec): min=125, max=626, avg=229.49, stdev=29.99 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:16:32.930 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:16:32.930 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 245], 95.00th=[ 269], 00:16:32.930 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 474], 99.95th=[ 537], 00:16:32.930 | 99.99th=[ 603] 00:16:32.930 bw ( KiB/s): min= 8264, max= 8264, per=23.76%, avg=8264.00, stdev= 0.00, samples=1 00:16:32.930 iops : min= 2066, max= 2066, avg=2066.00, stdev= 0.00, samples=1 00:16:32.930 lat (usec) : 250=78.13%, 500=20.86%, 750=1.02% 00:16:32.930 cpu : usr=1.50%, sys=5.50%, ctx=3837, majf=0, minf=7 00:16:32.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 issued rwts: total=1783,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.930 job2: (groupid=0, jobs=1): err= 0: pid=87310: Sat Dec 7 08:07:43 2024 00:16:32.930 read: IOPS=1782, BW=7129KiB/s (7300kB/s)(7136KiB/1001msec) 00:16:32.930 slat (nsec): min=11800, max=27117, avg=14503.13, stdev=1560.23 00:16:32.930 clat (usec): min=168, max=724, avg=278.73, stdev=83.34 00:16:32.930 lat (usec): min=183, max=740, avg=293.23, stdev=83.43 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 235], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 241], 00:16:32.930 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:16:32.930 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 469], 95.00th=[ 490], 00:16:32.930 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 717], 99.95th=[ 725], 00:16:32.930 | 99.99th=[ 725] 00:16:32.930 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:32.930 slat (usec): min=14, max=125, avg=22.14, stdev= 5.11 00:16:32.930 clat (usec): min=108, max=2619, avg=207.82, stdev=61.58 00:16:32.930 lat (usec): min=128, max=2640, avg=229.95, stdev=60.97 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 169], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 194], 00:16:32.930 | 30.00th=[ 196], 40.00th=[ 196], 50.00th=[ 198], 60.00th=[ 200], 00:16:32.930 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 251], 95.00th=[ 277], 00:16:32.930 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 553], 99.95th=[ 619], 00:16:32.930 | 99.99th=[ 2606] 00:16:32.930 bw ( KiB/s): min= 8248, max= 8248, per=23.71%, avg=8248.00, stdev= 0.00, samples=1 00:16:32.930 iops : min= 2062, max= 2062, avg=2062.00, stdev= 0.00, samples=1 00:16:32.930 lat (usec) : 250=76.83%, 500=22.05%, 750=1.10% 00:16:32.930 lat (msec) : 4=0.03% 00:16:32.930 cpu : usr=1.50%, sys=5.20%, ctx=3832, majf=0, minf=17 00:16:32.930 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 issued rwts: total=1784,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.930 job3: (groupid=0, jobs=1): err= 0: pid=87311: Sat Dec 7 08:07:43 2024 00:16:32.930 read: IOPS=2015, BW=8064KiB/s (8257kB/s)(8072KiB/1001msec) 00:16:32.930 slat (nsec): min=11567, max=41374, avg=14189.36, stdev=2770.77 00:16:32.930 clat (usec): min=178, max=553, avg=246.41, stdev=21.54 00:16:32.930 lat (usec): min=191, max=573, avg=260.60, stdev=22.11 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 237], 00:16:32.930 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:16:32.930 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:16:32.930 | 99.00th=[ 343], 99.50th=[ 379], 99.90th=[ 457], 99.95th=[ 506], 00:16:32.930 | 99.99th=[ 553] 00:16:32.930 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:32.930 slat (usec): min=18, max=104, avg=22.56, stdev= 5.39 00:16:32.930 clat (usec): min=108, max=557, avg=206.06, stdev=26.16 00:16:32.930 lat (usec): min=128, max=586, avg=228.62, stdev=28.13 00:16:32.930 clat percentiles (usec): 00:16:32.930 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 194], 00:16:32.930 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 202], 00:16:32.930 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 235], 95.00th=[ 260], 00:16:32.930 | 99.00th=[ 289], 99.50th=[ 330], 99.90th=[ 498], 99.95th=[ 515], 00:16:32.930 | 99.99th=[ 562] 00:16:32.930 bw ( KiB/s): min= 8296, max= 8296, per=23.85%, avg=8296.00, stdev= 0.00, samples=1 00:16:32.930 iops : min= 2074, max= 2074, avg=2074.00, stdev= 0.00, samples=1 00:16:32.930 lat (usec) : 250=82.19%, 500=17.71%, 750=0.10% 00:16:32.930 cpu : usr=1.50%, sys=5.40%, ctx=4066, majf=0, minf=8 00:16:32.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:32.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.930 issued rwts: total=2018,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.930 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:32.930 00:16:32.930 Run status group 0 (all jobs): 00:16:32.930 READ: bw=30.1MiB/s (31.6MB/s), 7125KiB/s-8511KiB/s (7296kB/s-8716kB/s), io=30.1MiB (31.6MB), run=1001-1001msec 00:16:32.930 WRITE: bw=34.0MiB/s (35.6MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=34.0MiB (35.7MB), run=1001-1001msec 00:16:32.930 00:16:32.930 Disk stats (read/write): 00:16:32.930 nvme0n1: ios=1715/2048, merge=0/0, ticks=435/410, in_queue=845, util=88.28% 00:16:32.930 nvme0n2: ios=1575/1996, merge=0/0, ticks=410/427, in_queue=837, util=88.06% 00:16:32.930 nvme0n3: ios=1536/1997, merge=0/0, ticks=381/422, in_queue=803, util=89.25% 00:16:32.930 nvme0n4: ios=1536/2004, merge=0/0, ticks=385/432, in_queue=817, util=89.80% 00:16:32.930 08:07:43 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:32.930 [global] 00:16:32.930 thread=1 00:16:32.930 invalidate=1 00:16:32.930 rw=randwrite 00:16:32.930 time_based=1 00:16:32.931 runtime=1 00:16:32.931 ioengine=libaio 00:16:32.931 direct=1 
00:16:32.931 bs=4096 00:16:32.931 iodepth=1 00:16:32.931 norandommap=0 00:16:32.931 numjobs=1 00:16:32.931 00:16:32.931 verify_dump=1 00:16:32.931 verify_backlog=512 00:16:32.931 verify_state_save=0 00:16:32.931 do_verify=1 00:16:32.931 verify=crc32c-intel 00:16:32.931 [job0] 00:16:32.931 filename=/dev/nvme0n1 00:16:32.931 [job1] 00:16:32.931 filename=/dev/nvme0n2 00:16:32.931 [job2] 00:16:32.931 filename=/dev/nvme0n3 00:16:32.931 [job3] 00:16:32.931 filename=/dev/nvme0n4 00:16:32.931 Could not set queue depth (nvme0n1) 00:16:32.931 Could not set queue depth (nvme0n2) 00:16:32.931 Could not set queue depth (nvme0n3) 00:16:32.931 Could not set queue depth (nvme0n4) 00:16:32.931 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.931 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.931 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.931 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:32.931 fio-3.35 00:16:32.931 Starting 4 threads 00:16:34.305 00:16:34.305 job0: (groupid=0, jobs=1): err= 0: pid=87364: Sat Dec 7 08:07:45 2024 00:16:34.305 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:34.305 slat (nsec): min=11561, max=44950, avg=13560.81, stdev=2947.24 00:16:34.305 clat (usec): min=126, max=2847, avg=179.79, stdev=89.15 00:16:34.305 lat (usec): min=139, max=2873, avg=193.35, stdev=89.81 00:16:34.305 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:16:34.306 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:16:34.306 | 70.00th=[ 161], 80.00th=[ 227], 90.00th=[ 262], 95.00th=[ 302], 00:16:34.306 | 99.00th=[ 379], 99.50th=[ 433], 99.90th=[ 1057], 99.95th=[ 1942], 00:16:34.306 | 99.99th=[ 2835] 00:16:34.306 write: IOPS=2958, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:16:34.306 slat (nsec): min=10321, max=93673, avg=19085.63, stdev=4582.69 00:16:34.306 clat (usec): min=97, max=306, avg=148.78, stdev=49.65 00:16:34.306 lat (usec): min=116, max=325, avg=167.87, stdev=48.74 00:16:34.306 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 103], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:16:34.306 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 131], 00:16:34.306 | 70.00th=[ 137], 80.00th=[ 182], 90.00th=[ 245], 95.00th=[ 260], 00:16:34.306 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 306], 00:16:34.306 | 99.99th=[ 306] 00:16:34.306 bw ( KiB/s): min=13032, max=13032, per=27.29%, avg=13032.00, stdev= 0.00, samples=1 00:16:34.306 iops : min= 3258, max= 3258, avg=3258.00, stdev= 0.00, samples=1 00:16:34.306 lat (usec) : 100=0.11%, 250=89.11%, 500=10.61%, 750=0.07%, 1000=0.04% 00:16:34.306 lat (msec) : 2=0.04%, 4=0.02% 00:16:34.306 cpu : usr=1.60%, sys=7.10%, ctx=5522, majf=0, minf=13 00:16:34.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 issued rwts: total=2560,2961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.306 job1: (groupid=0, jobs=1): err= 0: pid=87365: Sat Dec 7 08:07:45 2024 00:16:34.306 read: IOPS=3068, 
BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:34.306 slat (nsec): min=11807, max=45382, avg=13311.50, stdev=2366.74 00:16:34.306 clat (usec): min=125, max=2819, avg=153.10, stdev=61.74 00:16:34.306 lat (usec): min=138, max=2832, avg=166.41, stdev=61.89 00:16:34.306 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:16:34.306 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:16:34.306 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 174], 00:16:34.306 | 99.00th=[ 188], 99.50th=[ 206], 99.90th=[ 523], 99.95th=[ 1762], 00:16:34.306 | 99.99th=[ 2835] 00:16:34.306 write: IOPS=3353, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec); 0 zone resets 00:16:34.306 slat (nsec): min=17411, max=89459, avg=19764.39, stdev=3750.84 00:16:34.306 clat (usec): min=94, max=2539, avg=123.12, stdev=55.87 00:16:34.306 lat (usec): min=112, max=2558, avg=142.88, stdev=56.04 00:16:34.306 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 99], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 113], 00:16:34.306 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 123], 00:16:34.306 | 70.00th=[ 126], 80.00th=[ 130], 90.00th=[ 137], 95.00th=[ 141], 00:16:34.306 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 482], 99.95th=[ 1975], 00:16:34.306 | 99.99th=[ 2540] 00:16:34.306 bw ( KiB/s): min=13672, max=13672, per=28.63%, avg=13672.00, stdev= 0.00, samples=1 00:16:34.306 iops : min= 3418, max= 3418, avg=3418.00, stdev= 0.00, samples=1 00:16:34.306 lat (usec) : 100=0.81%, 250=98.86%, 500=0.23%, 750=0.02% 00:16:34.306 lat (msec) : 2=0.05%, 4=0.03% 00:16:34.306 cpu : usr=2.10%, sys=8.00%, ctx=6430, majf=0, minf=7 00:16:34.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 issued rwts: total=3072,3357,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.306 job2: (groupid=0, jobs=1): err= 0: pid=87366: Sat Dec 7 08:07:45 2024 00:16:34.306 read: IOPS=2472, BW=9890KiB/s (10.1MB/s)(9900KiB/1001msec) 00:16:34.306 slat (usec): min=10, max=120, avg=15.95, stdev= 8.11 00:16:34.306 clat (usec): min=59, max=3270, avg=201.15, stdev=95.28 00:16:34.306 lat (usec): min=154, max=3286, avg=217.10, stdev=94.63 00:16:34.306 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:16:34.306 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 178], 00:16:34.306 | 70.00th=[ 229], 80.00th=[ 251], 90.00th=[ 281], 95.00th=[ 330], 00:16:34.306 | 99.00th=[ 371], 99.50th=[ 379], 99.90th=[ 1549], 99.95th=[ 1844], 00:16:34.306 | 99.99th=[ 3261] 00:16:34.306 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:34.306 slat (nsec): min=10257, max=93827, avg=21884.66, stdev=5991.51 00:16:34.306 clat (usec): min=106, max=8094, avg=155.73, stdev=165.44 00:16:34.306 lat (usec): min=125, max=8114, avg=177.62, stdev=165.11 00:16:34.306 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 129], 00:16:34.306 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:16:34.306 | 70.00th=[ 145], 80.00th=[ 155], 90.00th=[ 231], 95.00th=[ 255], 00:16:34.306 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 1057], 99.95th=[ 1582], 00:16:34.306 | 99.99th=[ 8094] 00:16:34.306 bw ( KiB/s): min=12288, 
max=12288, per=25.73%, avg=12288.00, stdev= 0.00, samples=1 00:16:34.306 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:34.306 lat (usec) : 100=0.10%, 250=86.89%, 500=12.85%, 750=0.04% 00:16:34.306 lat (msec) : 2=0.08%, 4=0.02%, 10=0.02% 00:16:34.306 cpu : usr=1.40%, sys=7.70%, ctx=5066, majf=0, minf=14 00:16:34.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 issued rwts: total=2475,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.306 job3: (groupid=0, jobs=1): err= 0: pid=87367: Sat Dec 7 08:07:45 2024 00:16:34.306 read: IOPS=2763, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:16:34.306 slat (nsec): min=11968, max=42185, avg=13963.94, stdev=2732.34 00:16:34.306 clat (usec): min=141, max=646, avg=169.02, stdev=18.60 00:16:34.306 lat (usec): min=153, max=672, avg=182.99, stdev=18.88 00:16:34.306 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:16:34.306 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:16:34.306 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 194], 00:16:34.306 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 388], 99.95th=[ 506], 00:16:34.306 | 99.99th=[ 644] 00:16:34.306 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:34.306 slat (nsec): min=17814, max=74854, avg=21334.71, stdev=4385.60 00:16:34.306 clat (usec): min=108, max=579, avg=136.80, stdev=16.42 00:16:34.306 lat (usec): min=127, max=614, avg=158.14, stdev=17.46 00:16:34.306 clat percentiles (usec): 00:16:34.306 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:16:34.306 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:16:34.306 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:16:34.306 | 99.00th=[ 174], 99.50th=[ 178], 99.90th=[ 231], 99.95th=[ 529], 00:16:34.306 | 99.99th=[ 578] 00:16:34.306 bw ( KiB/s): min=12288, max=12288, per=25.73%, avg=12288.00, stdev= 0.00, samples=1 00:16:34.306 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:34.306 lat (usec) : 250=99.86%, 500=0.07%, 750=0.07% 00:16:34.306 cpu : usr=1.80%, sys=8.00%, ctx=5838, majf=0, minf=11 00:16:34.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:34.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.306 issued rwts: total=2766,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:34.306 00:16:34.306 Run status group 0 (all jobs): 00:16:34.306 READ: bw=42.4MiB/s (44.5MB/s), 9890KiB/s-12.0MiB/s (10.1MB/s-12.6MB/s), io=42.5MiB (44.5MB), run=1001-1001msec 00:16:34.306 WRITE: bw=46.6MiB/s (48.9MB/s), 9.99MiB/s-13.1MiB/s (10.5MB/s-13.7MB/s), io=46.7MiB (48.9MB), run=1001-1001msec 00:16:34.306 00:16:34.306 Disk stats (read/write): 00:16:34.306 nvme0n1: ios=2469/2560, merge=0/0, ticks=457/371, in_queue=828, util=88.87% 00:16:34.306 nvme0n2: ios=2575/3051, merge=0/0, ticks=417/401, in_queue=818, util=88.54% 00:16:34.306 nvme0n3: ios=2048/2526, merge=0/0, ticks=386/407, in_queue=793, util=89.00% 00:16:34.306 nvme0n4: ios=2494/2560, merge=0/0, 
ticks=434/371, in_queue=805, util=89.88% 00:16:34.306 08:07:45 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:34.306 [global] 00:16:34.306 thread=1 00:16:34.306 invalidate=1 00:16:34.306 rw=write 00:16:34.306 time_based=1 00:16:34.306 runtime=1 00:16:34.306 ioengine=libaio 00:16:34.306 direct=1 00:16:34.306 bs=4096 00:16:34.306 iodepth=128 00:16:34.306 norandommap=0 00:16:34.306 numjobs=1 00:16:34.306 00:16:34.306 verify_dump=1 00:16:34.306 verify_backlog=512 00:16:34.306 verify_state_save=0 00:16:34.306 do_verify=1 00:16:34.306 verify=crc32c-intel 00:16:34.306 [job0] 00:16:34.306 filename=/dev/nvme0n1 00:16:34.306 [job1] 00:16:34.306 filename=/dev/nvme0n2 00:16:34.306 [job2] 00:16:34.306 filename=/dev/nvme0n3 00:16:34.306 [job3] 00:16:34.306 filename=/dev/nvme0n4 00:16:34.306 Could not set queue depth (nvme0n1) 00:16:34.306 Could not set queue depth (nvme0n2) 00:16:34.306 Could not set queue depth (nvme0n3) 00:16:34.306 Could not set queue depth (nvme0n4) 00:16:34.306 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.306 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.306 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.306 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:34.306 fio-3.35 00:16:34.306 Starting 4 threads 00:16:35.682 00:16:35.682 job0: (groupid=0, jobs=1): err= 0: pid=87431: Sat Dec 7 08:07:46 2024 00:16:35.682 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:16:35.682 slat (usec): min=4, max=3468, avg=82.60, stdev=389.83 00:16:35.682 clat (usec): min=8017, max=14338, avg=11035.48, stdev=1068.36 00:16:35.682 lat (usec): min=8031, max=14658, avg=11118.08, stdev=1053.76 00:16:35.682 clat percentiles (usec): 00:16:35.682 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10421], 00:16:35.682 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:16:35.682 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12780], 00:16:35.682 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14091], 99.95th=[14091], 00:16:35.682 | 99.99th=[14353] 00:16:35.682 write: IOPS=5889, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1002msec); 0 zone resets 00:16:35.682 slat (usec): min=10, max=3303, avg=83.41, stdev=358.78 00:16:35.682 clat (usec): min=261, max=14602, avg=10948.04, stdev=1448.75 00:16:35.682 lat (usec): min=3078, max=14621, avg=11031.45, stdev=1426.12 00:16:35.682 clat percentiles (usec): 00:16:35.682 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9634], 00:16:35.682 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:16:35.682 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12387], 00:16:35.682 | 99.00th=[13829], 99.50th=[14222], 99.90th=[14615], 99.95th=[14615], 00:16:35.682 | 99.99th=[14615] 00:16:35.682 bw ( KiB/s): min=21564, max=24576, per=34.36%, avg=23070.00, stdev=2129.81, samples=2 00:16:35.682 iops : min= 5391, max= 6144, avg=5767.50, stdev=532.45, samples=2 00:16:35.682 lat (usec) : 500=0.01% 00:16:35.682 lat (msec) : 4=0.35%, 10=18.78%, 20=80.86% 00:16:35.682 cpu : usr=4.90%, sys=15.08%, ctx=751, majf=0, minf=6 00:16:35.682 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:35.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:35.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.682 issued rwts: total=5632,5901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.683 job1: (groupid=0, jobs=1): err= 0: pid=87432: Sat Dec 7 08:07:46 2024 00:16:35.683 read: IOPS=5460, BW=21.3MiB/s (22.4MB/s)(21.4MiB/1002msec) 00:16:35.683 slat (usec): min=7, max=2527, avg=84.65, stdev=373.09 00:16:35.683 clat (usec): min=363, max=13821, avg=11147.44, stdev=1084.20 00:16:35.683 lat (usec): min=1390, max=13829, avg=11232.09, stdev=1033.40 00:16:35.683 clat percentiles (usec): 00:16:35.683 | 1.00th=[ 6325], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:16:35.683 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:16:35.683 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[12256], 00:16:35.683 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13566], 99.95th=[13566], 00:16:35.683 | 99.99th=[13829] 00:16:35.683 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:16:35.683 slat (usec): min=10, max=6594, avg=88.48, stdev=374.56 00:16:35.683 clat (usec): min=8675, max=27694, avg=11507.74, stdev=2047.14 00:16:35.683 lat (usec): min=8696, max=27719, avg=11596.22, stdev=2052.12 00:16:35.683 clat percentiles (usec): 00:16:35.683 | 1.00th=[ 9110], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:16:35.683 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:16:35.683 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:16:35.683 | 99.00th=[22414], 99.50th=[26346], 99.90th=[27657], 99.95th=[27657], 00:16:35.683 | 99.99th=[27657] 00:16:35.683 bw ( KiB/s): min=22568, max=22568, per=33.61%, avg=22568.00, stdev= 0.00, samples=1 00:16:35.683 iops : min= 5642, max= 5642, avg=5642.00, stdev= 0.00, samples=1 00:16:35.683 lat (usec) : 500=0.01% 00:16:35.683 lat (msec) : 2=0.04%, 4=0.29%, 10=14.18%, 20=84.66%, 50=0.83% 00:16:35.683 cpu : usr=4.20%, sys=14.19%, ctx=806, majf=0, minf=9 00:16:35.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:35.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.683 issued rwts: total=5471,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.683 job2: (groupid=0, jobs=1): err= 0: pid=87433: Sat Dec 7 08:07:46 2024 00:16:35.683 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:16:35.683 slat (usec): min=6, max=7853, avg=186.95, stdev=799.98 00:16:35.683 clat (usec): min=17522, max=32096, avg=24799.15, stdev=1582.76 00:16:35.683 lat (usec): min=17542, max=32111, avg=24986.10, stdev=1393.79 00:16:35.683 clat percentiles (usec): 00:16:35.683 | 1.00th=[19268], 5.00th=[22414], 10.00th=[23462], 20.00th=[23987], 00:16:35.683 | 30.00th=[24249], 40.00th=[24511], 50.00th=[24773], 60.00th=[25035], 00:16:35.683 | 70.00th=[25035], 80.00th=[25560], 90.00th=[26870], 95.00th=[27395], 00:16:35.683 | 99.00th=[28181], 99.50th=[28967], 99.90th=[28967], 99.95th=[32113], 00:16:35.683 | 99.99th=[32113] 00:16:35.683 write: IOPS=2624, BW=10.3MiB/s (10.7MB/s)(10.3MiB/1004msec); 0 zone resets 00:16:35.683 slat (usec): min=13, max=6660, avg=190.05, stdev=894.01 00:16:35.683 clat (usec): min=213, max=32414, avg=23720.13, stdev=3231.56 00:16:35.683 lat (usec): min=3587, max=32439, avg=23910.18, 
stdev=3135.09 00:16:35.683 clat percentiles (usec): 00:16:35.683 | 1.00th=[ 4424], 5.00th=[19530], 10.00th=[21627], 20.00th=[22938], 00:16:35.683 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:16:35.683 | 70.00th=[24511], 80.00th=[25035], 90.00th=[25822], 95.00th=[27132], 00:16:35.683 | 99.00th=[28967], 99.50th=[28967], 99.90th=[32375], 99.95th=[32375], 00:16:35.683 | 99.99th=[32375] 00:16:35.683 bw ( KiB/s): min= 8934, max=11551, per=15.25%, avg=10242.50, stdev=1850.50, samples=2 00:16:35.683 iops : min= 2233, max= 2887, avg=2560.00, stdev=462.45, samples=2 00:16:35.683 lat (usec) : 250=0.02% 00:16:35.683 lat (msec) : 4=0.23%, 10=0.58%, 20=2.77%, 50=96.40% 00:16:35.683 cpu : usr=3.69%, sys=7.18%, ctx=245, majf=0, minf=11 00:16:35.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:35.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.683 issued rwts: total=2560,2635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.683 job3: (groupid=0, jobs=1): err= 0: pid=87434: Sat Dec 7 08:07:46 2024 00:16:35.683 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:16:35.683 slat (usec): min=6, max=9072, avg=191.00, stdev=778.11 00:16:35.683 clat (usec): min=16228, max=30446, avg=24137.88, stdev=2108.17 00:16:35.683 lat (usec): min=18702, max=30461, avg=24328.88, stdev=1992.43 00:16:35.683 clat percentiles (usec): 00:16:35.683 | 1.00th=[19006], 5.00th=[19792], 10.00th=[21365], 20.00th=[22676], 00:16:35.683 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[24773], 00:16:35.683 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25822], 95.00th=[27132], 00:16:35.683 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:16:35.683 | 99.99th=[30540] 00:16:35.683 write: IOPS=2674, BW=10.4MiB/s (11.0MB/s)(10.5MiB/1004msec); 0 zone resets 00:16:35.683 slat (usec): min=13, max=6506, avg=181.80, stdev=856.42 00:16:35.683 clat (usec): min=1920, max=30437, avg=23996.68, stdev=3551.48 00:16:35.683 lat (usec): min=6505, max=30458, avg=24178.48, stdev=3460.69 00:16:35.683 clat percentiles (usec): 00:16:35.683 | 1.00th=[ 7373], 5.00th=[18220], 10.00th=[20055], 20.00th=[23462], 00:16:35.683 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:16:35.683 | 70.00th=[24511], 80.00th=[25560], 90.00th=[27919], 95.00th=[29754], 00:16:35.683 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:16:35.683 | 99.99th=[30540] 00:16:35.683 bw ( KiB/s): min= 8710, max=11775, per=15.25%, avg=10242.50, stdev=2167.28, samples=2 00:16:35.683 iops : min= 2177, max= 2943, avg=2560.00, stdev=541.64, samples=2 00:16:35.683 lat (msec) : 2=0.02%, 10=0.61%, 20=6.94%, 50=92.43% 00:16:35.683 cpu : usr=2.69%, sys=8.47%, ctx=241, majf=0, minf=19 00:16:35.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:35.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.683 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:35.683 issued rwts: total=2560,2685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:35.683 00:16:35.683 Run status group 0 (all jobs): 00:16:35.683 READ: bw=63.1MiB/s (66.2MB/s), 9.96MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=63.4MiB (66.4MB), run=1002-1004msec 00:16:35.683 
WRITE: bw=65.6MiB/s (68.8MB/s), 10.3MiB/s-23.0MiB/s (10.7MB/s-24.1MB/s), io=65.8MiB (69.0MB), run=1002-1004msec 00:16:35.683 00:16:35.683 Disk stats (read/write): 00:16:35.683 nvme0n1: ios=4971/5120, merge=0/0, ticks=16600/15937, in_queue=32537, util=89.38% 00:16:35.683 nvme0n2: ios=4657/5030, merge=0/0, ticks=12080/12689, in_queue=24769, util=89.10% 00:16:35.683 nvme0n3: ios=2048/2442, merge=0/0, ticks=12362/13362, in_queue=25724, util=89.26% 00:16:35.683 nvme0n4: ios=2054/2492, merge=0/0, ticks=12273/13239, in_queue=25512, util=89.93% 00:16:35.683 08:07:46 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:35.683 [global] 00:16:35.683 thread=1 00:16:35.683 invalidate=1 00:16:35.683 rw=randwrite 00:16:35.683 time_based=1 00:16:35.683 runtime=1 00:16:35.683 ioengine=libaio 00:16:35.683 direct=1 00:16:35.683 bs=4096 00:16:35.683 iodepth=128 00:16:35.683 norandommap=0 00:16:35.683 numjobs=1 00:16:35.683 00:16:35.683 verify_dump=1 00:16:35.683 verify_backlog=512 00:16:35.683 verify_state_save=0 00:16:35.683 do_verify=1 00:16:35.683 verify=crc32c-intel 00:16:35.683 [job0] 00:16:35.683 filename=/dev/nvme0n1 00:16:35.683 [job1] 00:16:35.683 filename=/dev/nvme0n2 00:16:35.683 [job2] 00:16:35.683 filename=/dev/nvme0n3 00:16:35.683 [job3] 00:16:35.683 filename=/dev/nvme0n4 00:16:35.683 Could not set queue depth (nvme0n1) 00:16:35.683 Could not set queue depth (nvme0n2) 00:16:35.683 Could not set queue depth (nvme0n3) 00:16:35.683 Could not set queue depth (nvme0n4) 00:16:35.683 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.683 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.683 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.683 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:35.683 fio-3.35 00:16:35.683 Starting 4 threads 00:16:37.104 00:16:37.104 job0: (groupid=0, jobs=1): err= 0: pid=87487: Sat Dec 7 08:07:48 2024 00:16:37.104 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:16:37.104 slat (usec): min=3, max=2755, avg=81.80, stdev=360.83 00:16:37.104 clat (usec): min=8338, max=12952, avg=11009.86, stdev=771.48 00:16:37.104 lat (usec): min=8865, max=14469, avg=11091.66, stdev=699.10 00:16:37.104 clat percentiles (usec): 00:16:37.104 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10683], 00:16:37.104 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:16:37.104 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:16:37.104 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12911], 99.95th=[12911], 00:16:37.104 | 99.99th=[12911] 00:16:37.104 write: IOPS=5890, BW=23.0MiB/s (24.1MB/s)(23.0MiB/1001msec); 0 zone resets 00:16:37.104 slat (usec): min=7, max=2894, avg=84.43, stdev=353.35 00:16:37.104 clat (usec): min=302, max=13456, avg=10930.94, stdev=1340.12 00:16:37.104 lat (usec): min=322, max=13483, avg=11015.36, stdev=1329.81 00:16:37.104 clat percentiles (usec): 00:16:37.104 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9896], 00:16:37.104 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:16:37.104 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:16:37.104 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 
00:16:37.104 | 99.99th=[13435] 00:16:37.104 bw ( KiB/s): min=24625, max=24625, per=36.20%, avg=24625.00, stdev= 0.00, samples=1 00:16:37.104 iops : min= 6156, max= 6156, avg=6156.00, stdev= 0.00, samples=1 00:16:37.104 lat (usec) : 500=0.03% 00:16:37.104 lat (msec) : 4=0.30%, 10=17.18%, 20=82.49% 00:16:37.104 cpu : usr=5.10%, sys=13.80%, ctx=814, majf=0, minf=12 00:16:37.104 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:37.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.104 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:37.104 issued rwts: total=5632,5896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.104 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.104 job1: (groupid=0, jobs=1): err= 0: pid=87488: Sat Dec 7 08:07:48 2024 00:16:37.104 read: IOPS=3092, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1006msec) 00:16:37.104 slat (usec): min=4, max=9843, avg=123.38, stdev=651.26 00:16:37.104 clat (usec): min=4428, max=27439, avg=15264.94, stdev=2849.19 00:16:37.104 lat (usec): min=7191, max=28459, avg=15388.32, stdev=2884.99 00:16:37.104 clat percentiles (usec): 00:16:37.104 | 1.00th=[ 8979], 5.00th=[11863], 10.00th=[12256], 20.00th=[13304], 00:16:37.104 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14877], 60.00th=[15401], 00:16:37.104 | 70.00th=[16319], 80.00th=[17171], 90.00th=[19268], 95.00th=[20841], 00:16:37.104 | 99.00th=[23987], 99.50th=[25035], 99.90th=[27395], 99.95th=[27395], 00:16:37.104 | 99.99th=[27395] 00:16:37.104 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:16:37.104 slat (usec): min=10, max=6998, avg=163.92, stdev=676.94 00:16:37.104 clat (usec): min=8057, max=37025, avg=22189.24, stdev=8130.59 00:16:37.104 lat (usec): min=8083, max=37060, avg=22353.15, stdev=8188.22 00:16:37.104 clat percentiles (usec): 00:16:37.104 | 1.00th=[10945], 5.00th=[12911], 10.00th=[13173], 20.00th=[13698], 00:16:37.104 | 30.00th=[15139], 40.00th=[16712], 50.00th=[22152], 60.00th=[23725], 00:16:37.104 | 70.00th=[27395], 80.00th=[31327], 90.00th=[34866], 95.00th=[35390], 00:16:37.104 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:16:37.104 | 99.99th=[36963] 00:16:37.104 bw ( KiB/s): min=11560, max=16416, per=20.56%, avg=13988.00, stdev=3433.71, samples=2 00:16:37.105 iops : min= 2890, max= 4104, avg=3497.00, stdev=858.43, samples=2 00:16:37.105 lat (msec) : 10=1.00%, 20=66.21%, 50=32.79% 00:16:37.105 cpu : usr=2.49%, sys=11.04%, ctx=406, majf=0, minf=11 00:16:37.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:37.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:37.105 issued rwts: total=3111,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.105 job2: (groupid=0, jobs=1): err= 0: pid=87489: Sat Dec 7 08:07:48 2024 00:16:37.105 read: IOPS=4845, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1004msec) 00:16:37.105 slat (usec): min=7, max=4221, avg=96.18, stdev=436.99 00:16:37.105 clat (usec): min=1252, max=15257, avg=12596.06, stdev=1262.08 00:16:37.105 lat (usec): min=3616, max=16941, avg=12692.25, stdev=1197.77 00:16:37.105 clat percentiles (usec): 00:16:37.105 | 1.00th=[ 6980], 5.00th=[10421], 10.00th=[11207], 20.00th=[12256], 00:16:37.105 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:16:37.105 | 
70.00th=[13042], 80.00th=[13304], 90.00th=[13829], 95.00th=[14091], 00:16:37.105 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15139], 99.95th=[15139], 00:16:37.105 | 99.99th=[15270] 00:16:37.105 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:16:37.105 slat (usec): min=10, max=3177, avg=96.80, stdev=392.07 00:16:37.105 clat (usec): min=9765, max=15809, avg=12793.88, stdev=1109.63 00:16:37.105 lat (usec): min=9788, max=15830, avg=12890.68, stdev=1077.26 00:16:37.105 clat percentiles (usec): 00:16:37.105 | 1.00th=[10552], 5.00th=[10814], 10.00th=[10945], 20.00th=[11338], 00:16:37.105 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:16:37.105 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13829], 95.00th=[14091], 00:16:37.105 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:16:37.105 | 99.99th=[15795] 00:16:37.105 bw ( KiB/s): min=20439, max=20521, per=30.11%, avg=20480.00, stdev=57.98, samples=2 00:16:37.105 iops : min= 5109, max= 5130, avg=5119.50, stdev=14.85, samples=2 00:16:37.105 lat (msec) : 2=0.01%, 4=0.24%, 10=0.86%, 20=98.89% 00:16:37.105 cpu : usr=4.29%, sys=13.36%, ctx=724, majf=0, minf=5 00:16:37.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:37.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:37.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:37.105 issued rwts: total=4865,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.105 job3: (groupid=0, jobs=1): err= 0: pid=87490: Sat Dec 7 08:07:48 2024 00:16:37.105 read: IOPS=2157, BW=8630KiB/s (8837kB/s)(8708KiB/1009msec) 00:16:37.105 slat (usec): min=7, max=8736, avg=265.75, stdev=1071.36 00:16:37.105 clat (usec): min=6246, max=56437, avg=33311.55, stdev=10785.57 00:16:37.105 lat (usec): min=9581, max=56454, avg=33577.29, stdev=10814.00 00:16:37.105 clat percentiles (usec): 00:16:37.105 | 1.00th=[ 9896], 5.00th=[19268], 10.00th=[21103], 20.00th=[23200], 00:16:37.105 | 30.00th=[23987], 40.00th=[27395], 50.00th=[32637], 60.00th=[37487], 00:16:37.105 | 70.00th=[41157], 80.00th=[43779], 90.00th=[47449], 95.00th=[50070], 00:16:37.105 | 99.00th=[54264], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:16:37.105 | 99.99th=[56361] 00:16:37.105 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:16:37.105 slat (usec): min=17, max=7854, avg=157.13, stdev=760.82 00:16:37.105 clat (usec): min=13624, max=51544, avg=21236.45, stdev=6699.19 00:16:37.105 lat (usec): min=15892, max=51569, avg=21393.58, stdev=6695.69 00:16:37.105 clat percentiles (usec): 00:16:37.105 | 1.00th=[14222], 5.00th=[16909], 10.00th=[17171], 20.00th=[17433], 00:16:37.105 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:16:37.105 | 70.00th=[20055], 80.00th=[27657], 90.00th=[32113], 95.00th=[33424], 00:16:37.105 | 99.00th=[49546], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:16:37.105 | 99.99th=[51643] 00:16:37.105 bw ( KiB/s): min= 8192, max=12263, per=15.03%, avg=10227.50, stdev=2878.63, samples=2 00:16:37.105 iops : min= 2048, max= 3065, avg=2556.50, stdev=719.13, samples=2 00:16:37.105 lat (msec) : 10=0.51%, 20=39.14%, 50=57.80%, 100=2.55% 00:16:37.105 cpu : usr=2.28%, sys=7.44%, ctx=218, majf=0, minf=11 00:16:37.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:37.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:16:37.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:37.105 issued rwts: total=2177,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:37.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:37.105 00:16:37.105 Run status group 0 (all jobs): 00:16:37.105 READ: bw=61.1MiB/s (64.1MB/s), 8630KiB/s-22.0MiB/s (8837kB/s-23.0MB/s), io=61.7MiB (64.7MB), run=1001-1009msec 00:16:37.105 WRITE: bw=66.4MiB/s (69.7MB/s), 9.91MiB/s-23.0MiB/s (10.4MB/s-24.1MB/s), io=67.0MiB (70.3MB), run=1001-1009msec 00:16:37.105 00:16:37.105 Disk stats (read/write): 00:16:37.105 nvme0n1: ios=4859/5120, merge=0/0, ticks=12486/11950, in_queue=24436, util=88.67% 00:16:37.105 nvme0n2: ios=2899/3072, merge=0/0, ticks=21038/30219, in_queue=51257, util=88.46% 00:16:37.105 nvme0n3: ios=4096/4493, merge=0/0, ticks=12424/12386, in_queue=24810, util=89.19% 00:16:37.105 nvme0n4: ios=1856/2048, merge=0/0, ticks=16109/9751, in_queue=25860, util=89.75% 00:16:37.105 08:07:48 -- target/fio.sh@55 -- # sync 00:16:37.105 08:07:48 -- target/fio.sh@59 -- # fio_pid=87503 00:16:37.105 08:07:48 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:37.105 08:07:48 -- target/fio.sh@61 -- # sleep 3 00:16:37.105 [global] 00:16:37.105 thread=1 00:16:37.105 invalidate=1 00:16:37.105 rw=read 00:16:37.105 time_based=1 00:16:37.105 runtime=10 00:16:37.105 ioengine=libaio 00:16:37.105 direct=1 00:16:37.105 bs=4096 00:16:37.105 iodepth=1 00:16:37.105 norandommap=1 00:16:37.105 numjobs=1 00:16:37.105 00:16:37.105 [job0] 00:16:37.105 filename=/dev/nvme0n1 00:16:37.105 [job1] 00:16:37.105 filename=/dev/nvme0n2 00:16:37.105 [job2] 00:16:37.105 filename=/dev/nvme0n3 00:16:37.105 [job3] 00:16:37.105 filename=/dev/nvme0n4 00:16:37.105 Could not set queue depth (nvme0n1) 00:16:37.105 Could not set queue depth (nvme0n2) 00:16:37.105 Could not set queue depth (nvme0n3) 00:16:37.105 Could not set queue depth (nvme0n4) 00:16:37.105 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.105 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.105 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.105 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:37.105 fio-3.35 00:16:37.105 Starting 4 threads 00:16:40.389 08:07:51 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:40.389 fio: pid=87556, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:40.389 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42811392, buflen=4096 00:16:40.389 08:07:51 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:40.647 fio: pid=87555, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:40.647 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=56193024, buflen=4096 00:16:40.647 08:07:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.647 08:07:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:40.647 fio: pid=87553, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:40.647 fio: io_u error on file 
/dev/nvme0n1: Operation not supported: read offset=53817344, buflen=4096 00:16:40.905 08:07:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:40.905 08:07:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:40.905 fio: pid=87554, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:16:40.905 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1003520, buflen=4096 00:16:41.163 00:16:41.163 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87553: Sat Dec 7 08:07:52 2024 00:16:41.163 read: IOPS=3845, BW=15.0MiB/s (15.7MB/s)(51.3MiB/3417msec) 00:16:41.163 slat (usec): min=9, max=13001, avg=15.60, stdev=177.29 00:16:41.163 clat (usec): min=103, max=3063, avg=243.35, stdev=65.60 00:16:41.163 lat (usec): min=138, max=13194, avg=258.95, stdev=188.31 00:16:41.163 clat percentiles (usec): 00:16:41.163 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 180], 00:16:41.163 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:16:41.163 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:16:41.163 | 99.00th=[ 330], 99.50th=[ 363], 99.90th=[ 486], 99.95th=[ 693], 00:16:41.163 | 99.99th=[ 3032] 00:16:41.163 bw ( KiB/s): min=13760, max=14592, per=24.39%, avg=14310.67, stdev=372.10, samples=6 00:16:41.163 iops : min= 3440, max= 3648, avg=3577.67, stdev=93.02, samples=6 00:16:41.163 lat (usec) : 250=39.92%, 500=59.97%, 750=0.05%, 1000=0.02% 00:16:41.163 lat (msec) : 2=0.01%, 4=0.02% 00:16:41.163 cpu : usr=0.73%, sys=4.54%, ctx=13171, majf=0, minf=1 00:16:41.163 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.163 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.163 issued rwts: total=13140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.163 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.163 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87554: Sat Dec 7 08:07:52 2024 00:16:41.163 read: IOPS=4521, BW=17.7MiB/s (18.5MB/s)(65.0MiB/3678msec) 00:16:41.163 slat (usec): min=9, max=14789, avg=17.69, stdev=200.10 00:16:41.163 clat (usec): min=3, max=30336, avg=202.08, stdev=255.75 00:16:41.163 lat (usec): min=128, max=30349, avg=219.77, stdev=324.45 00:16:41.163 clat percentiles (usec): 00:16:41.163 | 1.00th=[ 123], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 143], 00:16:41.163 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 172], 60.00th=[ 235], 00:16:41.163 | 70.00th=[ 245], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 285], 00:16:41.163 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 881], 99.95th=[ 1926], 00:16:41.163 | 99.99th=[ 7439] 00:16:41.163 bw ( KiB/s): min=14232, max=23576, per=30.52%, avg=17901.43, stdev=4295.66, samples=7 00:16:41.163 iops : min= 3558, max= 5894, avg=4475.29, stdev=1073.87, samples=7 00:16:41.163 lat (usec) : 4=0.01%, 250=73.72%, 500=26.06%, 750=0.10%, 1000=0.01% 00:16:41.163 lat (msec) : 2=0.05%, 4=0.02%, 10=0.02%, 50=0.01% 00:16:41.163 cpu : usr=1.14%, sys=5.52%, ctx=16651, majf=0, minf=2 00:16:41.163 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.163 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:41.163 issued rwts: total=16630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.163 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.163 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87555: Sat Dec 7 08:07:52 2024 00:16:41.163 read: IOPS=4321, BW=16.9MiB/s (17.7MB/s)(53.6MiB/3175msec) 00:16:41.163 slat (usec): min=10, max=11801, avg=15.35, stdev=119.40 00:16:41.163 clat (usec): min=127, max=3556, avg=214.59, stdev=66.73 00:16:41.163 lat (usec): min=145, max=12042, avg=229.94, stdev=136.66 00:16:41.163 clat percentiles (usec): 00:16:41.163 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 157], 00:16:41.163 | 30.00th=[ 165], 40.00th=[ 178], 50.00th=[ 233], 60.00th=[ 241], 00:16:41.163 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:16:41.163 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 562], 99.95th=[ 848], 00:16:41.163 | 99.99th=[ 2409] 00:16:41.163 bw ( KiB/s): min=14584, max=22808, per=29.81%, avg=17489.33, stdev=3960.07, samples=6 00:16:41.163 iops : min= 3646, max= 5702, avg=4372.33, stdev=990.02, samples=6 00:16:41.163 lat (usec) : 250=68.79%, 500=31.03%, 750=0.12%, 1000=0.01% 00:16:41.163 lat (msec) : 2=0.02%, 4=0.01% 00:16:41.163 cpu : usr=1.29%, sys=5.04%, ctx=13742, majf=0, minf=1 00:16:41.163 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.163 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.163 issued rwts: total=13720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.163 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.163 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87556: Sat Dec 7 08:07:52 2024 00:16:41.163 read: IOPS=3565, BW=13.9MiB/s (14.6MB/s)(40.8MiB/2932msec) 00:16:41.163 slat (nsec): min=9660, max=77035, avg=12462.51, stdev=3765.36 00:16:41.163 clat (usec): min=159, max=1900, avg=266.71, stdev=31.08 00:16:41.163 lat (usec): min=169, max=1912, avg=279.17, stdev=31.40 00:16:41.163 clat percentiles (usec): 00:16:41.163 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:16:41.163 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:16:41.163 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 306], 00:16:41.163 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 429], 99.95th=[ 611], 00:16:41.164 | 99.99th=[ 1004] 00:16:41.164 bw ( KiB/s): min=13768, max=14568, per=24.30%, avg=14254.40, stdev=382.76, samples=5 00:16:41.164 iops : min= 3442, max= 3642, avg=3563.60, stdev=95.69, samples=5 00:16:41.164 lat (usec) : 250=25.09%, 500=74.83%, 750=0.03%, 1000=0.02% 00:16:41.164 lat (msec) : 2=0.02% 00:16:41.164 cpu : usr=0.75%, sys=4.20%, ctx=10467, majf=0, minf=2 00:16:41.164 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:41.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.164 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.164 issued rwts: total=10453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.164 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:41.164 00:16:41.164 Run status group 0 (all jobs): 00:16:41.164 READ: bw=57.3MiB/s (60.1MB/s), 13.9MiB/s-17.7MiB/s (14.6MB/s-18.5MB/s), io=211MiB (221MB), run=2932-3678msec 00:16:41.164 00:16:41.164 Disk stats (read/write): 00:16:41.164 
nvme0n1: ios=12833/0, merge=0/0, ticks=3150/0, in_queue=3150, util=95.31% 00:16:41.164 nvme0n2: ios=16239/0, merge=0/0, ticks=3317/0, in_queue=3317, util=95.16% 00:16:41.164 nvme0n3: ios=13518/0, merge=0/0, ticks=2903/0, in_queue=2903, util=96.30% 00:16:41.164 nvme0n4: ios=10235/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.79% 00:16:41.164 08:07:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.164 08:07:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:41.422 08:07:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.422 08:07:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:41.679 08:07:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.679 08:07:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:41.935 08:07:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:41.935 08:07:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:42.193 08:07:53 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:42.193 08:07:53 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:42.451 08:07:53 -- target/fio.sh@69 -- # fio_status=0 00:16:42.451 08:07:53 -- target/fio.sh@70 -- # wait 87503 00:16:42.451 08:07:53 -- target/fio.sh@70 -- # fio_status=4 00:16:42.451 08:07:53 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.451 08:07:53 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.451 08:07:53 -- common/autotest_common.sh@1208 -- # local i=0 00:16:42.451 08:07:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:42.451 08:07:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.451 08:07:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:42.451 08:07:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.451 nvmf hotplug test: fio failed as expected 00:16:42.451 08:07:53 -- common/autotest_common.sh@1220 -- # return 0 00:16:42.451 08:07:53 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:42.451 08:07:53 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:42.451 08:07:53 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.708 08:07:53 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:42.708 08:07:53 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:42.708 08:07:53 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:42.708 08:07:53 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:42.708 08:07:53 -- target/fio.sh@91 -- # nvmftestfini 00:16:42.708 08:07:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.708 08:07:53 -- nvmf/common.sh@116 -- # sync 00:16:42.708 08:07:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.708 08:07:53 -- nvmf/common.sh@119 -- # set +e 00:16:42.708 08:07:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.708 08:07:53 -- nvmf/common.sh@121 -- # modprobe -v 
-r nvme-tcp 00:16:42.709 rmmod nvme_tcp 00:16:42.709 rmmod nvme_fabrics 00:16:42.709 rmmod nvme_keyring 00:16:42.967 08:07:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.967 08:07:54 -- nvmf/common.sh@123 -- # set -e 00:16:42.967 08:07:54 -- nvmf/common.sh@124 -- # return 0 00:16:42.967 08:07:54 -- nvmf/common.sh@477 -- # '[' -n 87010 ']' 00:16:42.967 08:07:54 -- nvmf/common.sh@478 -- # killprocess 87010 00:16:42.967 08:07:54 -- common/autotest_common.sh@936 -- # '[' -z 87010 ']' 00:16:42.967 08:07:54 -- common/autotest_common.sh@940 -- # kill -0 87010 00:16:42.967 08:07:54 -- common/autotest_common.sh@941 -- # uname 00:16:42.967 08:07:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.967 08:07:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87010 00:16:42.967 killing process with pid 87010 00:16:42.967 08:07:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.967 08:07:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.967 08:07:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87010' 00:16:42.967 08:07:54 -- common/autotest_common.sh@955 -- # kill 87010 00:16:42.967 08:07:54 -- common/autotest_common.sh@960 -- # wait 87010 00:16:42.967 08:07:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.967 08:07:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.967 08:07:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.967 08:07:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.967 08:07:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.967 08:07:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.967 08:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.967 08:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.225 08:07:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:43.225 00:16:43.225 real 0m19.879s 00:16:43.225 user 1m16.459s 00:16:43.225 sys 0m8.793s 00:16:43.225 08:07:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:43.225 08:07:54 -- common/autotest_common.sh@10 -- # set +x 00:16:43.225 ************************************ 00:16:43.225 END TEST nvmf_fio_target 00:16:43.225 ************************************ 00:16:43.225 08:07:54 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:43.225 08:07:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.225 08:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.225 08:07:54 -- common/autotest_common.sh@10 -- # set +x 00:16:43.225 ************************************ 00:16:43.225 START TEST nvmf_bdevio 00:16:43.225 ************************************ 00:16:43.225 08:07:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:43.225 * Looking for test storage... 
00:16:43.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:43.225 08:07:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:43.225 08:07:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:43.225 08:07:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:43.225 08:07:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:43.225 08:07:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:43.225 08:07:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:43.225 08:07:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:43.225 08:07:54 -- scripts/common.sh@335 -- # IFS=.-: 00:16:43.225 08:07:54 -- scripts/common.sh@335 -- # read -ra ver1 00:16:43.225 08:07:54 -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.225 08:07:54 -- scripts/common.sh@336 -- # read -ra ver2 00:16:43.225 08:07:54 -- scripts/common.sh@337 -- # local 'op=<' 00:16:43.225 08:07:54 -- scripts/common.sh@339 -- # ver1_l=2 00:16:43.225 08:07:54 -- scripts/common.sh@340 -- # ver2_l=1 00:16:43.225 08:07:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:43.225 08:07:54 -- scripts/common.sh@343 -- # case "$op" in 00:16:43.225 08:07:54 -- scripts/common.sh@344 -- # : 1 00:16:43.225 08:07:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:43.225 08:07:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:43.225 08:07:54 -- scripts/common.sh@364 -- # decimal 1 00:16:43.225 08:07:54 -- scripts/common.sh@352 -- # local d=1 00:16:43.225 08:07:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.225 08:07:54 -- scripts/common.sh@354 -- # echo 1 00:16:43.225 08:07:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:43.225 08:07:54 -- scripts/common.sh@365 -- # decimal 2 00:16:43.225 08:07:54 -- scripts/common.sh@352 -- # local d=2 00:16:43.225 08:07:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.225 08:07:54 -- scripts/common.sh@354 -- # echo 2 00:16:43.225 08:07:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:43.225 08:07:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:43.225 08:07:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:43.225 08:07:54 -- scripts/common.sh@367 -- # return 0 00:16:43.225 08:07:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.225 08:07:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.225 --rc genhtml_branch_coverage=1 00:16:43.225 --rc genhtml_function_coverage=1 00:16:43.225 --rc genhtml_legend=1 00:16:43.225 --rc geninfo_all_blocks=1 00:16:43.225 --rc geninfo_unexecuted_blocks=1 00:16:43.225 00:16:43.225 ' 00:16:43.225 08:07:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.225 --rc genhtml_branch_coverage=1 00:16:43.225 --rc genhtml_function_coverage=1 00:16:43.225 --rc genhtml_legend=1 00:16:43.225 --rc geninfo_all_blocks=1 00:16:43.225 --rc geninfo_unexecuted_blocks=1 00:16:43.225 00:16:43.225 ' 00:16:43.225 08:07:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.225 --rc genhtml_branch_coverage=1 00:16:43.225 --rc genhtml_function_coverage=1 00:16:43.225 --rc genhtml_legend=1 00:16:43.225 --rc geninfo_all_blocks=1 00:16:43.225 --rc geninfo_unexecuted_blocks=1 00:16:43.225 00:16:43.225 ' 00:16:43.225 
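The lines above show the nvmf_bdevio stage starting up: bdevio.sh locates its test storage and sources the shared autotest helpers (hence the lcov option exports). For reference, reproducing this stage by hand amounts to running the same script that run_test launched, as recorded in the xtrace earlier in this log; the absolute path is specific to this CI host, so substitute your own checkout:

  # invocation as captured in the log; normally run as root, since these tests
  # load and unload the nvme-tcp kernel modules (see the modprobe/rmmod lines above)
  /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp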
08:07:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:43.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.225 --rc genhtml_branch_coverage=1 00:16:43.225 --rc genhtml_function_coverage=1 00:16:43.225 --rc genhtml_legend=1 00:16:43.225 --rc geninfo_all_blocks=1 00:16:43.225 --rc geninfo_unexecuted_blocks=1 00:16:43.225 00:16:43.225 ' 00:16:43.225 08:07:54 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.225 08:07:54 -- nvmf/common.sh@7 -- # uname -s 00:16:43.225 08:07:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.225 08:07:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.225 08:07:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.225 08:07:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.225 08:07:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.225 08:07:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.225 08:07:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.225 08:07:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.225 08:07:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.225 08:07:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.225 08:07:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:43.225 08:07:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:43.225 08:07:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.225 08:07:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.225 08:07:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.225 08:07:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.225 08:07:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.225 08:07:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.225 08:07:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.225 08:07:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.226 08:07:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.226 08:07:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.226 08:07:54 -- paths/export.sh@5 -- # export PATH 00:16:43.226 08:07:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.226 08:07:54 -- nvmf/common.sh@46 -- # : 0 00:16:43.226 08:07:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:43.226 08:07:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:43.226 08:07:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:43.226 08:07:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.226 08:07:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.226 08:07:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:43.226 08:07:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:43.226 08:07:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:43.484 08:07:54 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:43.484 08:07:54 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:43.484 08:07:54 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:43.484 08:07:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:43.484 08:07:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.484 08:07:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:43.484 08:07:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:43.484 08:07:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:43.484 08:07:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.484 08:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.484 08:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.484 08:07:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:43.484 08:07:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:43.484 08:07:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:43.484 08:07:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:43.484 08:07:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:43.484 08:07:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:43.484 08:07:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.484 08:07:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.484 08:07:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.484 08:07:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:43.484 08:07:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.484 08:07:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.484 08:07:54 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.484 08:07:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.484 08:07:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.484 08:07:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.484 08:07:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.484 08:07:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.484 08:07:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:43.484 08:07:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:43.484 Cannot find device "nvmf_tgt_br" 00:16:43.484 08:07:54 -- nvmf/common.sh@154 -- # true 00:16:43.484 08:07:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.484 Cannot find device "nvmf_tgt_br2" 00:16:43.484 08:07:54 -- nvmf/common.sh@155 -- # true 00:16:43.484 08:07:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:43.484 08:07:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:43.484 Cannot find device "nvmf_tgt_br" 00:16:43.484 08:07:54 -- nvmf/common.sh@157 -- # true 00:16:43.484 08:07:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:43.484 Cannot find device "nvmf_tgt_br2" 00:16:43.484 08:07:54 -- nvmf/common.sh@158 -- # true 00:16:43.484 08:07:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:43.484 08:07:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:43.484 08:07:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.484 08:07:54 -- nvmf/common.sh@161 -- # true 00:16:43.484 08:07:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.484 08:07:54 -- nvmf/common.sh@162 -- # true 00:16:43.484 08:07:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.484 08:07:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.484 08:07:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.484 08:07:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.484 08:07:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.484 08:07:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.484 08:07:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.484 08:07:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.484 08:07:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.484 08:07:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:43.484 08:07:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:43.484 08:07:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:43.484 08:07:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:43.484 08:07:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.484 08:07:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:43.484 08:07:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:43.484 08:07:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:43.484 08:07:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:43.484 08:07:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.484 08:07:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.484 08:07:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.742 08:07:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.742 08:07:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.742 08:07:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:43.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:16:43.742 00:16:43.742 --- 10.0.0.2 ping statistics --- 00:16:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.742 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:43.742 08:07:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:43.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:43.742 00:16:43.742 --- 10.0.0.3 ping statistics --- 00:16:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.742 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:43.742 08:07:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:43.742 00:16:43.742 --- 10.0.0.1 ping statistics --- 00:16:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.742 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:43.743 08:07:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.743 08:07:54 -- nvmf/common.sh@421 -- # return 0 00:16:43.743 08:07:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:43.743 08:07:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.743 08:07:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:43.743 08:07:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:43.743 08:07:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.743 08:07:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:43.743 08:07:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:43.743 08:07:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:43.743 08:07:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.743 08:07:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.743 08:07:54 -- common/autotest_common.sh@10 -- # set +x 00:16:43.743 08:07:54 -- nvmf/common.sh@469 -- # nvmfpid=87883 00:16:43.743 08:07:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:43.743 08:07:54 -- nvmf/common.sh@470 -- # waitforlisten 87883 00:16:43.743 08:07:54 -- common/autotest_common.sh@829 -- # '[' -z 87883 ']' 00:16:43.743 08:07:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
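The block ending here is the harness' nvmf_veth_init: the "Cannot find device" / "Cannot open network namespace" lines appear to be the script first clearing any leftovers, after which it builds a veth/bridge topology between the host (initiator side) and a target network namespace, and verifies it with pings in both directions. Condensed into a standalone sketch, with the same interface names and addresses as in the log (run as root; error handling and teardown omitted):

  # Initiator stays in the host namespace; the target gets its own netns.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address (listener)
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Tie the host-side veth ends together with a bridge and open TCP/4420.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                  # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1         # target -> initiator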
00:16:43.743 08:07:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.743 08:07:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.743 08:07:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.743 08:07:54 -- common/autotest_common.sh@10 -- # set +x 00:16:43.743 [2024-12-07 08:07:54.866042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:43.743 [2024-12-07 08:07:54.866122] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.743 [2024-12-07 08:07:55.001465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.000 [2024-12-07 08:07:55.067873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:44.000 [2024-12-07 08:07:55.067999] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.000 [2024-12-07 08:07:55.068011] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.000 [2024-12-07 08:07:55.068018] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.000 [2024-12-07 08:07:55.069141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:44.000 [2024-12-07 08:07:55.069272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:44.000 [2024-12-07 08:07:55.069422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:44.000 [2024-12-07 08:07:55.069426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.566 08:07:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.566 08:07:55 -- common/autotest_common.sh@862 -- # return 0 00:16:44.566 08:07:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:44.566 08:07:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.566 08:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.824 08:07:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.824 08:07:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.824 08:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.824 08:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.824 [2024-12-07 08:07:55.858998] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.824 08:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.824 08:07:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:44.824 08:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.824 08:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.824 Malloc0 00:16:44.824 08:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.824 08:07:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:44.824 08:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.824 08:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.824 08:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.824 08:07:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
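The target above is started with core mask 0x78, and the four "Reactor started on core" lines (cores 3-6) follow directly from that mask: each set bit selects one core. A quick, illustrative way to decode any SPDK/DPDK core mask:

  # 0x78 = 0111 1000b -> bits 3..6 set, so reactors land on cores 3, 4, 5 and 6,
  # which matches the reactor_run notices in the log above.
  mask=0x78
  for c in $(seq 0 31); do
    if (( (mask >> c) & 1 )); then echo "reactor on core $c"; fi
  done

The bdevio application launched a little later runs with -c 0x7, i.e. cores 0-2, which likewise matches its "Total cores available: 3" and reactor lines.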
00:16:44.824 08:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.824 08:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.824 08:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.824 08:07:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.824 08:07:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.824 08:07:55 -- common/autotest_common.sh@10 -- # set +x 00:16:44.824 [2024-12-07 08:07:55.929493] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.824 08:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.824 08:07:55 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:44.825 08:07:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:44.825 08:07:55 -- nvmf/common.sh@520 -- # config=() 00:16:44.825 08:07:55 -- nvmf/common.sh@520 -- # local subsystem config 00:16:44.825 08:07:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:44.825 08:07:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:44.825 { 00:16:44.825 "params": { 00:16:44.825 "name": "Nvme$subsystem", 00:16:44.825 "trtype": "$TEST_TRANSPORT", 00:16:44.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.825 "adrfam": "ipv4", 00:16:44.825 "trsvcid": "$NVMF_PORT", 00:16:44.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.825 "hdgst": ${hdgst:-false}, 00:16:44.825 "ddgst": ${ddgst:-false} 00:16:44.825 }, 00:16:44.825 "method": "bdev_nvme_attach_controller" 00:16:44.825 } 00:16:44.825 EOF 00:16:44.825 )") 00:16:44.825 08:07:55 -- nvmf/common.sh@542 -- # cat 00:16:44.825 08:07:55 -- nvmf/common.sh@544 -- # jq . 00:16:44.825 08:07:55 -- nvmf/common.sh@545 -- # IFS=, 00:16:44.825 08:07:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:44.825 "params": { 00:16:44.825 "name": "Nvme1", 00:16:44.825 "trtype": "tcp", 00:16:44.825 "traddr": "10.0.0.2", 00:16:44.825 "adrfam": "ipv4", 00:16:44.825 "trsvcid": "4420", 00:16:44.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.825 "hdgst": false, 00:16:44.825 "ddgst": false 00:16:44.825 }, 00:16:44.825 "method": "bdev_nvme_attach_controller" 00:16:44.825 }' 00:16:44.825 [2024-12-07 08:07:55.984635] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:44.825 [2024-12-07 08:07:55.984732] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87937 ] 00:16:45.083 [2024-12-07 08:07:56.128514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.083 [2024-12-07 08:07:56.208024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.083 [2024-12-07 08:07:56.208166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.083 [2024-12-07 08:07:56.208443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.340 [2024-12-07 08:07:56.377555] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
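Steps 18-22 of bdevio.sh above configure the running target entirely over JSON-RPC. The rpc_cmd calls map onto plain scripts/rpc.py invocations; a hand-run sketch with the arguments copied verbatim from the log (the RPC socket path /var/tmp/spdk.sock is the default the log itself mentions) would look roughly like this:

  # Reproduce the target-side setup by hand against the running nvmf_tgt.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                       # transport options copied verbatim from the test
  $RPC bdev_malloc_create 64 512 -b Malloc0                          # 64 MiB RAM-backed bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then connects back to that listener as an initiator, using the bdev_nvme_attach_controller parameters printed just above (traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1).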
00:16:45.340 [2024-12-07 08:07:56.377855] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:45.340 I/O targets: 00:16:45.340 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:45.340 00:16:45.340 00:16:45.340 CUnit - A unit testing framework for C - Version 2.1-3 00:16:45.340 http://cunit.sourceforge.net/ 00:16:45.340 00:16:45.340 00:16:45.340 Suite: bdevio tests on: Nvme1n1 00:16:45.340 Test: blockdev write read block ...passed 00:16:45.340 Test: blockdev write zeroes read block ...passed 00:16:45.340 Test: blockdev write zeroes read no split ...passed 00:16:45.340 Test: blockdev write zeroes read split ...passed 00:16:45.340 Test: blockdev write zeroes read split partial ...passed 00:16:45.340 Test: blockdev reset ...[2024-12-07 08:07:56.491985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:45.340 [2024-12-07 08:07:56.492192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1eed0 (9): Bad file descriptor 00:16:45.340 [2024-12-07 08:07:56.505825] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:45.340 passed 00:16:45.340 Test: blockdev write read 8 blocks ...passed 00:16:45.340 Test: blockdev write read size > 128k ...passed 00:16:45.340 Test: blockdev write read invalid size ...passed 00:16:45.340 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:45.340 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:45.340 Test: blockdev write read max offset ...passed 00:16:45.599 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:45.599 Test: blockdev writev readv 8 blocks ...passed 00:16:45.599 Test: blockdev writev readv 30 x 1block ...passed 00:16:45.599 Test: blockdev writev readv block ...passed 00:16:45.599 Test: blockdev writev readv size > 128k ...passed 00:16:45.599 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:45.599 Test: blockdev comparev and writev ...[2024-12-07 08:07:56.678668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.678835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.678931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.679015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.679441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.679549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.679634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.679718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.680164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.680285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.680371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.680446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.680798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.680886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.680970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:45.599 [2024-12-07 08:07:56.681039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:45.599 passed 00:16:45.599 Test: blockdev nvme passthru rw ...passed 00:16:45.599 Test: blockdev nvme passthru vendor specific ...[2024-12-07 08:07:56.763513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.599 [2024-12-07 08:07:56.763680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.763883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.599 [2024-12-07 08:07:56.763987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.764175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.599 [2024-12-07 08:07:56.764293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:45.599 [2024-12-07 08:07:56.764473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:45.599 [2024-12-07 08:07:56.764573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:45.599 passed 00:16:45.599 Test: blockdev nvme admin passthru ...passed 00:16:45.599 Test: blockdev copy ...passed 00:16:45.599 00:16:45.599 Run Summary: Type Total Ran Passed Failed Inactive 00:16:45.599 suites 1 1 n/a 0 0 00:16:45.599 tests 23 23 23 0 0 00:16:45.599 asserts 152 152 152 0 n/a 00:16:45.599 00:16:45.599 Elapsed time = 0.888 seconds 00:16:45.858 08:07:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.858 08:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.858 08:07:56 -- common/autotest_common.sh@10 -- # set +x 00:16:45.858 08:07:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.858 08:07:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:45.858 08:07:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:45.858 08:07:57 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:45.858 08:07:57 -- nvmf/common.sh@116 -- # sync 00:16:45.858 08:07:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:45.858 08:07:57 -- nvmf/common.sh@119 -- # set +e 00:16:45.858 08:07:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:45.858 08:07:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:45.858 rmmod nvme_tcp 00:16:45.858 rmmod nvme_fabrics 00:16:45.858 rmmod nvme_keyring 00:16:45.858 08:07:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:45.858 08:07:57 -- nvmf/common.sh@123 -- # set -e 00:16:45.858 08:07:57 -- nvmf/common.sh@124 -- # return 0 00:16:45.858 08:07:57 -- nvmf/common.sh@477 -- # '[' -n 87883 ']' 00:16:45.858 08:07:57 -- nvmf/common.sh@478 -- # killprocess 87883 00:16:45.858 08:07:57 -- common/autotest_common.sh@936 -- # '[' -z 87883 ']' 00:16:45.858 08:07:57 -- common/autotest_common.sh@940 -- # kill -0 87883 00:16:46.117 08:07:57 -- common/autotest_common.sh@941 -- # uname 00:16:46.117 08:07:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:46.117 08:07:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87883 00:16:46.117 08:07:57 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:46.117 08:07:57 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:46.117 08:07:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87883' 00:16:46.117 killing process with pid 87883 00:16:46.117 08:07:57 -- common/autotest_common.sh@955 -- # kill 87883 00:16:46.117 08:07:57 -- common/autotest_common.sh@960 -- # wait 87883 00:16:46.376 08:07:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:46.376 08:07:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:46.376 08:07:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:46.376 08:07:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.376 08:07:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:46.376 08:07:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.376 08:07:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.376 08:07:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.376 08:07:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:46.376 00:16:46.376 real 0m3.127s 00:16:46.376 user 0m11.244s 00:16:46.376 sys 0m0.779s 00:16:46.376 08:07:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:46.376 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:16:46.376 ************************************ 00:16:46.376 END TEST nvmf_bdevio 00:16:46.376 ************************************ 00:16:46.376 08:07:57 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:46.376 08:07:57 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:46.376 08:07:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:46.376 08:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.376 08:07:57 -- common/autotest_common.sh@10 -- # set +x 00:16:46.376 ************************************ 00:16:46.376 START TEST nvmf_bdevio_no_huge 00:16:46.376 ************************************ 00:16:46.376 08:07:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:46.376 * Looking for test storage... 
00:16:46.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.376 08:07:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:46.376 08:07:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:46.376 08:07:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:46.376 08:07:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:46.376 08:07:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:46.376 08:07:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:46.376 08:07:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:46.376 08:07:57 -- scripts/common.sh@335 -- # IFS=.-: 00:16:46.376 08:07:57 -- scripts/common.sh@335 -- # read -ra ver1 00:16:46.376 08:07:57 -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.376 08:07:57 -- scripts/common.sh@336 -- # read -ra ver2 00:16:46.376 08:07:57 -- scripts/common.sh@337 -- # local 'op=<' 00:16:46.376 08:07:57 -- scripts/common.sh@339 -- # ver1_l=2 00:16:46.376 08:07:57 -- scripts/common.sh@340 -- # ver2_l=1 00:16:46.376 08:07:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:46.376 08:07:57 -- scripts/common.sh@343 -- # case "$op" in 00:16:46.376 08:07:57 -- scripts/common.sh@344 -- # : 1 00:16:46.376 08:07:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:46.376 08:07:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:46.376 08:07:57 -- scripts/common.sh@364 -- # decimal 1 00:16:46.376 08:07:57 -- scripts/common.sh@352 -- # local d=1 00:16:46.376 08:07:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.376 08:07:57 -- scripts/common.sh@354 -- # echo 1 00:16:46.376 08:07:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:46.376 08:07:57 -- scripts/common.sh@365 -- # decimal 2 00:16:46.376 08:07:57 -- scripts/common.sh@352 -- # local d=2 00:16:46.376 08:07:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.376 08:07:57 -- scripts/common.sh@354 -- # echo 2 00:16:46.376 08:07:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:46.376 08:07:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:46.376 08:07:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:46.376 08:07:57 -- scripts/common.sh@367 -- # return 0 00:16:46.376 08:07:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.376 08:07:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:46.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.376 --rc genhtml_branch_coverage=1 00:16:46.376 --rc genhtml_function_coverage=1 00:16:46.376 --rc genhtml_legend=1 00:16:46.376 --rc geninfo_all_blocks=1 00:16:46.376 --rc geninfo_unexecuted_blocks=1 00:16:46.376 00:16:46.376 ' 00:16:46.376 08:07:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:46.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.376 --rc genhtml_branch_coverage=1 00:16:46.376 --rc genhtml_function_coverage=1 00:16:46.376 --rc genhtml_legend=1 00:16:46.376 --rc geninfo_all_blocks=1 00:16:46.376 --rc geninfo_unexecuted_blocks=1 00:16:46.376 00:16:46.376 ' 00:16:46.376 08:07:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:46.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.376 --rc genhtml_branch_coverage=1 00:16:46.376 --rc genhtml_function_coverage=1 00:16:46.376 --rc genhtml_legend=1 00:16:46.376 --rc geninfo_all_blocks=1 00:16:46.376 --rc geninfo_unexecuted_blocks=1 00:16:46.376 00:16:46.376 ' 00:16:46.376 
08:07:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:46.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.376 --rc genhtml_branch_coverage=1 00:16:46.376 --rc genhtml_function_coverage=1 00:16:46.376 --rc genhtml_legend=1 00:16:46.376 --rc geninfo_all_blocks=1 00:16:46.376 --rc geninfo_unexecuted_blocks=1 00:16:46.376 00:16:46.376 ' 00:16:46.635 08:07:57 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.635 08:07:57 -- nvmf/common.sh@7 -- # uname -s 00:16:46.635 08:07:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.635 08:07:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.635 08:07:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.635 08:07:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.635 08:07:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.635 08:07:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.635 08:07:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.635 08:07:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.635 08:07:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.635 08:07:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.635 08:07:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:46.635 08:07:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:46.635 08:07:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.635 08:07:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.635 08:07:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.635 08:07:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.635 08:07:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.635 08:07:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.635 08:07:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.635 08:07:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.635 08:07:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.635 08:07:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.635 08:07:57 -- paths/export.sh@5 -- # export PATH 00:16:46.635 08:07:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.635 08:07:57 -- nvmf/common.sh@46 -- # : 0 00:16:46.635 08:07:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:46.635 08:07:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:46.635 08:07:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:46.635 08:07:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.635 08:07:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.635 08:07:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:46.635 08:07:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:46.635 08:07:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:46.635 08:07:57 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.635 08:07:57 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.635 08:07:57 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:46.635 08:07:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:46.635 08:07:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.635 08:07:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:46.635 08:07:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:46.635 08:07:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:46.635 08:07:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.635 08:07:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.635 08:07:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.635 08:07:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:46.635 08:07:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:46.635 08:07:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:46.635 08:07:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:46.635 08:07:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:46.635 08:07:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:46.635 08:07:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.635 08:07:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.635 08:07:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:46.635 08:07:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:46.635 08:07:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.635 08:07:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.635 08:07:57 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.635 08:07:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.635 08:07:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.635 08:07:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.635 08:07:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.635 08:07:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.635 08:07:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:46.635 08:07:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:46.635 Cannot find device "nvmf_tgt_br" 00:16:46.635 08:07:57 -- nvmf/common.sh@154 -- # true 00:16:46.635 08:07:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.635 Cannot find device "nvmf_tgt_br2" 00:16:46.635 08:07:57 -- nvmf/common.sh@155 -- # true 00:16:46.635 08:07:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:46.635 08:07:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:46.635 Cannot find device "nvmf_tgt_br" 00:16:46.635 08:07:57 -- nvmf/common.sh@157 -- # true 00:16:46.635 08:07:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:46.635 Cannot find device "nvmf_tgt_br2" 00:16:46.635 08:07:57 -- nvmf/common.sh@158 -- # true 00:16:46.635 08:07:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:46.635 08:07:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:46.635 08:07:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.635 08:07:57 -- nvmf/common.sh@161 -- # true 00:16:46.635 08:07:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.635 08:07:57 -- nvmf/common.sh@162 -- # true 00:16:46.635 08:07:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.635 08:07:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.636 08:07:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.636 08:07:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.636 08:07:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.636 08:07:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.636 08:07:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.636 08:07:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:46.636 08:07:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:46.894 08:07:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:46.894 08:07:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:46.894 08:07:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:46.894 08:07:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:46.894 08:07:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.894 08:07:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:46.894 08:07:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:46.894 08:07:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:46.894 08:07:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:46.894 08:07:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.894 08:07:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.894 08:07:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.894 08:07:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.894 08:07:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.894 08:07:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:46.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:16:46.894 00:16:46.894 --- 10.0.0.2 ping statistics --- 00:16:46.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.894 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:46.894 08:07:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:46.894 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.894 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:46.894 00:16:46.894 --- 10.0.0.3 ping statistics --- 00:16:46.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.894 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:46.894 08:07:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:46.894 00:16:46.894 --- 10.0.0.1 ping statistics --- 00:16:46.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.894 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:46.894 08:07:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.894 08:07:58 -- nvmf/common.sh@421 -- # return 0 00:16:46.894 08:07:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:46.894 08:07:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.894 08:07:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:46.894 08:07:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:46.894 08:07:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.894 08:07:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:46.894 08:07:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:46.894 08:07:58 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:46.894 08:07:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:46.894 08:07:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:46.894 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:16:46.894 08:07:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:46.894 08:07:58 -- nvmf/common.sh@469 -- # nvmfpid=88124 00:16:46.894 08:07:58 -- nvmf/common.sh@470 -- # waitforlisten 88124 00:16:46.894 08:07:58 -- common/autotest_common.sh@829 -- # '[' -z 88124 ']' 00:16:46.894 08:07:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.894 08:07:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.894 08:07:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:46.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.894 08:07:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.894 08:07:58 -- common/autotest_common.sh@10 -- # set +x 00:16:46.894 [2024-12-07 08:07:58.093439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:46.895 [2024-12-07 08:07:58.093540] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:47.153 [2024-12-07 08:07:58.229967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:47.153 [2024-12-07 08:07:58.318730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:47.153 [2024-12-07 08:07:58.318894] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.153 [2024-12-07 08:07:58.318907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.153 [2024-12-07 08:07:58.318916] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.153 [2024-12-07 08:07:58.319053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:47.153 [2024-12-07 08:07:58.319220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:47.153 [2024-12-07 08:07:58.319482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:47.153 [2024-12-07 08:07:58.319629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.091 08:07:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.091 08:07:59 -- common/autotest_common.sh@862 -- # return 0 00:16:48.091 08:07:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:48.091 08:07:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:48.091 08:07:59 -- common/autotest_common.sh@10 -- # set +x 00:16:48.091 08:07:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.091 08:07:59 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:48.091 08:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.091 08:07:59 -- common/autotest_common.sh@10 -- # set +x 00:16:48.091 [2024-12-07 08:07:59.122489] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.091 08:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.091 08:07:59 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:48.091 08:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.091 08:07:59 -- common/autotest_common.sh@10 -- # set +x 00:16:48.091 Malloc0 00:16:48.091 08:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.091 08:07:59 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:48.091 08:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.091 08:07:59 -- common/autotest_common.sh@10 -- # set +x 00:16:48.091 08:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.091 08:07:59 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.091 08:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.091 08:07:59 -- common/autotest_common.sh@10 -- # set +x 
00:16:48.091 08:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.091 08:07:59 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.091 08:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.091 08:07:59 -- common/autotest_common.sh@10 -- # set +x 00:16:48.091 [2024-12-07 08:07:59.164987] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.091 08:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.091 08:07:59 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:48.091 08:07:59 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:48.091 08:07:59 -- nvmf/common.sh@520 -- # config=() 00:16:48.091 08:07:59 -- nvmf/common.sh@520 -- # local subsystem config 00:16:48.091 08:07:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:48.091 08:07:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:48.091 { 00:16:48.091 "params": { 00:16:48.091 "name": "Nvme$subsystem", 00:16:48.091 "trtype": "$TEST_TRANSPORT", 00:16:48.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:48.091 "adrfam": "ipv4", 00:16:48.091 "trsvcid": "$NVMF_PORT", 00:16:48.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:48.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:48.091 "hdgst": ${hdgst:-false}, 00:16:48.091 "ddgst": ${ddgst:-false} 00:16:48.092 }, 00:16:48.092 "method": "bdev_nvme_attach_controller" 00:16:48.092 } 00:16:48.092 EOF 00:16:48.092 )") 00:16:48.092 08:07:59 -- nvmf/common.sh@542 -- # cat 00:16:48.092 08:07:59 -- nvmf/common.sh@544 -- # jq . 00:16:48.092 08:07:59 -- nvmf/common.sh@545 -- # IFS=, 00:16:48.092 08:07:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:48.092 "params": { 00:16:48.092 "name": "Nvme1", 00:16:48.092 "trtype": "tcp", 00:16:48.092 "traddr": "10.0.0.2", 00:16:48.092 "adrfam": "ipv4", 00:16:48.092 "trsvcid": "4420", 00:16:48.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:48.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:48.092 "hdgst": false, 00:16:48.092 "ddgst": false 00:16:48.092 }, 00:16:48.092 "method": "bdev_nvme_attach_controller" 00:16:48.092 }' 00:16:48.092 [2024-12-07 08:07:59.221961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.092 [2024-12-07 08:07:59.222050] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid88178 ] 00:16:48.351 [2024-12-07 08:07:59.369624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:48.351 [2024-12-07 08:07:59.508603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.351 [2024-12-07 08:07:59.508739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.351 [2024-12-07 08:07:59.508746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.617 [2024-12-07 08:07:59.675750] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
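This second run exercises the same bdevio suite without hugepages. Relative to the first run, the only change is the extra "--no-huge -s 1024" pair on both the target and the bdevio app, which shows up in the DPDK EAL parameter lines as "-m 1024 --no-huge --iova-mode=va" (the hugepage-backed run used --iova-mode=pa). A minimal sketch of the two launches, with paths and flags copied from the log; gen_nvmf_target_json is assumed to be the helper sourced from test/nvmf/common.sh that emits the bdev_nvme_attach_controller config printed above, and the /dev/fd/62 argument in the log is that JSON fed in via process substitution:

  # Target inside the test namespace, backed by 1024 MB of ordinary (non-hugepage) memory:
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78

  # bdevio as the initiator, also without hugepages, reading its bdev config from the generated JSON:
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json) --no-huge -s 1024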
00:16:48.617 [2024-12-07 08:07:59.675797] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:48.617 I/O targets: 00:16:48.617 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:48.617 00:16:48.617 00:16:48.617 CUnit - A unit testing framework for C - Version 2.1-3 00:16:48.617 http://cunit.sourceforge.net/ 00:16:48.617 00:16:48.617 00:16:48.617 Suite: bdevio tests on: Nvme1n1 00:16:48.617 Test: blockdev write read block ...passed 00:16:48.617 Test: blockdev write zeroes read block ...passed 00:16:48.617 Test: blockdev write zeroes read no split ...passed 00:16:48.617 Test: blockdev write zeroes read split ...passed 00:16:48.617 Test: blockdev write zeroes read split partial ...passed 00:16:48.617 Test: blockdev reset ...[2024-12-07 08:07:59.802334] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:48.617 [2024-12-07 08:07:59.802426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d2820 (9): Bad file descriptor 00:16:48.617 [2024-12-07 08:07:59.814781] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:48.617 passed 00:16:48.617 Test: blockdev write read 8 blocks ...passed 00:16:48.617 Test: blockdev write read size > 128k ...passed 00:16:48.617 Test: blockdev write read invalid size ...passed 00:16:48.617 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:48.617 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:48.617 Test: blockdev write read max offset ...passed 00:16:48.876 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:48.876 Test: blockdev writev readv 8 blocks ...passed 00:16:48.876 Test: blockdev writev readv 30 x 1block ...passed 00:16:48.876 Test: blockdev writev readv block ...passed 00:16:48.876 Test: blockdev writev readv size > 128k ...passed 00:16:48.876 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:48.876 Test: blockdev comparev and writev ...[2024-12-07 08:07:59.988691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.988735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:07:59.988757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.988769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:07:59.989097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.989124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:07:59.989142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.989153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:07:59.989472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.989499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:07:59.989517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.989527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:07:59.989850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.989875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:07:59.989893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:48.876 [2024-12-07 08:07:59.989904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:48.876 passed 00:16:48.876 Test: blockdev nvme passthru rw ...passed 00:16:48.876 Test: blockdev nvme passthru vendor specific ...[2024-12-07 08:08:00.071555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.876 [2024-12-07 08:08:00.071604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:08:00.071729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.876 [2024-12-07 08:08:00.071746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:08:00.071860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.876 [2024-12-07 08:08:00.071886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:48.876 [2024-12-07 08:08:00.071999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:48.876 [2024-12-07 08:08:00.072023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:48.876 passed 00:16:48.876 Test: blockdev nvme admin passthru ...passed 00:16:48.876 Test: blockdev copy ...passed 00:16:48.876 00:16:48.876 Run Summary: Type Total Ran Passed Failed Inactive 00:16:48.876 suites 1 1 n/a 0 0 00:16:48.876 tests 23 23 23 0 0 00:16:48.876 asserts 152 152 152 0 n/a 00:16:48.876 00:16:48.876 Elapsed time = 0.897 seconds 00:16:49.443 08:08:00 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.443 08:08:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.443 08:08:00 -- common/autotest_common.sh@10 -- # set +x 00:16:49.443 08:08:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.443 08:08:00 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:49.443 08:08:00 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:49.443 08:08:00 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:49.443 08:08:00 -- nvmf/common.sh@116 -- # sync 00:16:49.443 08:08:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:49.443 08:08:00 -- nvmf/common.sh@119 -- # set +e 00:16:49.443 08:08:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:49.443 08:08:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:49.443 rmmod nvme_tcp 00:16:49.443 rmmod nvme_fabrics 00:16:49.443 rmmod nvme_keyring 00:16:49.443 08:08:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:49.443 08:08:00 -- nvmf/common.sh@123 -- # set -e 00:16:49.443 08:08:00 -- nvmf/common.sh@124 -- # return 0 00:16:49.443 08:08:00 -- nvmf/common.sh@477 -- # '[' -n 88124 ']' 00:16:49.443 08:08:00 -- nvmf/common.sh@478 -- # killprocess 88124 00:16:49.443 08:08:00 -- common/autotest_common.sh@936 -- # '[' -z 88124 ']' 00:16:49.443 08:08:00 -- common/autotest_common.sh@940 -- # kill -0 88124 00:16:49.443 08:08:00 -- common/autotest_common.sh@941 -- # uname 00:16:49.443 08:08:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.443 08:08:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88124 00:16:49.443 08:08:00 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:16:49.443 08:08:00 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:16:49.443 08:08:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88124' 00:16:49.443 killing process with pid 88124 00:16:49.443 08:08:00 -- common/autotest_common.sh@955 -- # kill 88124 00:16:49.443 08:08:00 -- common/autotest_common.sh@960 -- # wait 88124 00:16:49.702 08:08:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:49.702 08:08:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:49.702 08:08:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:49.702 08:08:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.702 08:08:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:49.702 08:08:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.702 08:08:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.702 08:08:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.961 08:08:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:49.961 00:16:49.961 real 0m3.499s 00:16:49.961 user 0m12.393s 00:16:49.961 sys 0m1.256s 00:16:49.961 08:08:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:49.961 ************************************ 00:16:49.961 END TEST nvmf_bdevio_no_huge 00:16:49.961 ************************************ 00:16:49.961 08:08:00 -- common/autotest_common.sh@10 -- # set +x 00:16:49.961 08:08:01 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:49.961 08:08:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:49.961 08:08:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.961 08:08:01 -- common/autotest_common.sh@10 -- # set +x 00:16:49.961 ************************************ 00:16:49.961 START TEST nvmf_tls 00:16:49.961 ************************************ 00:16:49.961 08:08:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:49.961 * Looking for test storage... 
00:16:49.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:49.961 08:08:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:49.961 08:08:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:49.961 08:08:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:50.220 08:08:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:50.220 08:08:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:50.220 08:08:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:50.220 08:08:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:50.220 08:08:01 -- scripts/common.sh@335 -- # IFS=.-: 00:16:50.220 08:08:01 -- scripts/common.sh@335 -- # read -ra ver1 00:16:50.220 08:08:01 -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.220 08:08:01 -- scripts/common.sh@336 -- # read -ra ver2 00:16:50.220 08:08:01 -- scripts/common.sh@337 -- # local 'op=<' 00:16:50.220 08:08:01 -- scripts/common.sh@339 -- # ver1_l=2 00:16:50.220 08:08:01 -- scripts/common.sh@340 -- # ver2_l=1 00:16:50.220 08:08:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:50.220 08:08:01 -- scripts/common.sh@343 -- # case "$op" in 00:16:50.220 08:08:01 -- scripts/common.sh@344 -- # : 1 00:16:50.220 08:08:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:50.220 08:08:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:50.220 08:08:01 -- scripts/common.sh@364 -- # decimal 1 00:16:50.220 08:08:01 -- scripts/common.sh@352 -- # local d=1 00:16:50.220 08:08:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.220 08:08:01 -- scripts/common.sh@354 -- # echo 1 00:16:50.220 08:08:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:50.220 08:08:01 -- scripts/common.sh@365 -- # decimal 2 00:16:50.220 08:08:01 -- scripts/common.sh@352 -- # local d=2 00:16:50.220 08:08:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.220 08:08:01 -- scripts/common.sh@354 -- # echo 2 00:16:50.220 08:08:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:50.220 08:08:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:50.220 08:08:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:50.220 08:08:01 -- scripts/common.sh@367 -- # return 0 00:16:50.220 08:08:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.220 08:08:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:50.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.220 --rc genhtml_branch_coverage=1 00:16:50.220 --rc genhtml_function_coverage=1 00:16:50.220 --rc genhtml_legend=1 00:16:50.220 --rc geninfo_all_blocks=1 00:16:50.220 --rc geninfo_unexecuted_blocks=1 00:16:50.220 00:16:50.220 ' 00:16:50.220 08:08:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:50.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.220 --rc genhtml_branch_coverage=1 00:16:50.220 --rc genhtml_function_coverage=1 00:16:50.220 --rc genhtml_legend=1 00:16:50.220 --rc geninfo_all_blocks=1 00:16:50.220 --rc geninfo_unexecuted_blocks=1 00:16:50.220 00:16:50.220 ' 00:16:50.220 08:08:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:50.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.220 --rc genhtml_branch_coverage=1 00:16:50.220 --rc genhtml_function_coverage=1 00:16:50.221 --rc genhtml_legend=1 00:16:50.221 --rc geninfo_all_blocks=1 00:16:50.221 --rc geninfo_unexecuted_blocks=1 00:16:50.221 00:16:50.221 ' 00:16:50.221 
08:08:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.221 --rc genhtml_branch_coverage=1 00:16:50.221 --rc genhtml_function_coverage=1 00:16:50.221 --rc genhtml_legend=1 00:16:50.221 --rc geninfo_all_blocks=1 00:16:50.221 --rc geninfo_unexecuted_blocks=1 00:16:50.221 00:16:50.221 ' 00:16:50.221 08:08:01 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.221 08:08:01 -- nvmf/common.sh@7 -- # uname -s 00:16:50.221 08:08:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.221 08:08:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.221 08:08:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.221 08:08:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.221 08:08:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.221 08:08:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.221 08:08:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.221 08:08:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.221 08:08:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.221 08:08:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.221 08:08:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:50.221 08:08:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:16:50.221 08:08:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.221 08:08:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.221 08:08:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.221 08:08:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.221 08:08:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.221 08:08:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.221 08:08:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.221 08:08:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.221 08:08:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.221 08:08:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.221 08:08:01 -- paths/export.sh@5 -- # export PATH 00:16:50.221 08:08:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.221 08:08:01 -- nvmf/common.sh@46 -- # : 0 00:16:50.221 08:08:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:50.221 08:08:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:50.221 08:08:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:50.221 08:08:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.221 08:08:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.221 08:08:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:50.221 08:08:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:50.221 08:08:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:50.221 08:08:01 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.221 08:08:01 -- target/tls.sh@71 -- # nvmftestinit 00:16:50.221 08:08:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:50.221 08:08:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.221 08:08:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:50.221 08:08:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:50.221 08:08:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:50.221 08:08:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.221 08:08:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.221 08:08:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.221 08:08:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:50.221 08:08:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:50.221 08:08:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:50.221 08:08:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:50.221 08:08:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:50.221 08:08:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:50.221 08:08:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.221 08:08:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.221 08:08:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.221 08:08:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:50.221 08:08:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.221 08:08:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.221 08:08:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.221 
08:08:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.221 08:08:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.221 08:08:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.221 08:08:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.221 08:08:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.221 08:08:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:50.221 08:08:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:50.221 Cannot find device "nvmf_tgt_br" 00:16:50.221 08:08:01 -- nvmf/common.sh@154 -- # true 00:16:50.221 08:08:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.221 Cannot find device "nvmf_tgt_br2" 00:16:50.221 08:08:01 -- nvmf/common.sh@155 -- # true 00:16:50.221 08:08:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:50.221 08:08:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:50.221 Cannot find device "nvmf_tgt_br" 00:16:50.221 08:08:01 -- nvmf/common.sh@157 -- # true 00:16:50.221 08:08:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:50.221 Cannot find device "nvmf_tgt_br2" 00:16:50.221 08:08:01 -- nvmf/common.sh@158 -- # true 00:16:50.221 08:08:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:50.221 08:08:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:50.221 08:08:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.221 08:08:01 -- nvmf/common.sh@161 -- # true 00:16:50.221 08:08:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.221 08:08:01 -- nvmf/common.sh@162 -- # true 00:16:50.221 08:08:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.221 08:08:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.221 08:08:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.221 08:08:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.221 08:08:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.480 08:08:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.480 08:08:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.480 08:08:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.480 08:08:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.480 08:08:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:50.480 08:08:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:50.480 08:08:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:50.480 08:08:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:50.480 08:08:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.480 08:08:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.480 08:08:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.480 08:08:01 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:50.480 08:08:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:50.480 08:08:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.480 08:08:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.480 08:08:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.480 08:08:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.480 08:08:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.480 08:08:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:50.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:16:50.480 00:16:50.480 --- 10.0.0.2 ping statistics --- 00:16:50.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.480 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:50.480 08:08:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:50.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:50.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:16:50.480 00:16:50.480 --- 10.0.0.3 ping statistics --- 00:16:50.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.480 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:50.480 08:08:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:50.480 00:16:50.480 --- 10.0.0.1 ping statistics --- 00:16:50.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.480 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:50.480 08:08:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.480 08:08:01 -- nvmf/common.sh@421 -- # return 0 00:16:50.480 08:08:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:50.480 08:08:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.480 08:08:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:50.480 08:08:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:50.480 08:08:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.480 08:08:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:50.480 08:08:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:50.480 08:08:01 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:50.480 08:08:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:50.480 08:08:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:50.480 08:08:01 -- common/autotest_common.sh@10 -- # set +x 00:16:50.480 08:08:01 -- nvmf/common.sh@469 -- # nvmfpid=88374 00:16:50.480 08:08:01 -- nvmf/common.sh@470 -- # waitforlisten 88374 00:16:50.480 08:08:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:50.480 08:08:01 -- common/autotest_common.sh@829 -- # '[' -z 88374 ']' 00:16:50.480 08:08:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.480 08:08:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:50.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
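The block above is nvmf_veth_init building the throwaway topology the TLS test runs on: a network namespace (nvmf_tgt_ns_spdk) holding the target-side veth ends with 10.0.0.2 and 10.0.0.3, the initiator end (10.0.0.1) left in the root namespace, everything joined by the nvmf_br bridge, an iptables rule admitting TCP/4420, and pings to confirm reachability before nvmf_tgt starts. A condensed sketch of the same steps, run as root; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity.

# Condensed sketch of the veth/netns scaffold built above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                        # both peer ends join the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # root ns -> target address inside the ns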
00:16:50.480 08:08:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.480 08:08:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:50.480 08:08:01 -- common/autotest_common.sh@10 -- # set +x 00:16:50.480 [2024-12-07 08:08:01.714676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:50.480 [2024-12-07 08:08:01.714798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.740 [2024-12-07 08:08:01.858421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.740 [2024-12-07 08:08:01.944330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.740 [2024-12-07 08:08:01.944505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.740 [2024-12-07 08:08:01.944527] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.740 [2024-12-07 08:08:01.944551] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.740 [2024-12-07 08:08:01.944597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.678 08:08:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.678 08:08:02 -- common/autotest_common.sh@862 -- # return 0 00:16:51.678 08:08:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:51.678 08:08:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.678 08:08:02 -- common/autotest_common.sh@10 -- # set +x 00:16:51.678 08:08:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.678 08:08:02 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:51.678 08:08:02 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:51.938 true 00:16:51.938 08:08:02 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:51.938 08:08:02 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:52.197 08:08:03 -- target/tls.sh@82 -- # version=0 00:16:52.197 08:08:03 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:52.197 08:08:03 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:52.456 08:08:03 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.456 08:08:03 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:52.715 08:08:03 -- target/tls.sh@90 -- # version=13 00:16:52.715 08:08:03 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:52.715 08:08:03 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:52.974 08:08:04 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:52.974 08:08:04 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:53.232 08:08:04 -- target/tls.sh@98 -- # version=7 00:16:53.232 08:08:04 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:53.232 08:08:04 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.232 08:08:04 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:53.232 08:08:04 -- 
target/tls.sh@105 -- # ktls=false 00:16:53.233 08:08:04 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:53.233 08:08:04 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:53.492 08:08:04 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:53.492 08:08:04 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:53.751 08:08:04 -- target/tls.sh@113 -- # ktls=true 00:16:53.751 08:08:04 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:53.751 08:08:04 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:54.011 08:08:05 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:54.011 08:08:05 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:54.270 08:08:05 -- target/tls.sh@121 -- # ktls=false 00:16:54.270 08:08:05 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:54.270 08:08:05 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:54.270 08:08:05 -- target/tls.sh@49 -- # local key hash crc 00:16:54.270 08:08:05 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:54.270 08:08:05 -- target/tls.sh@51 -- # hash=01 00:16:54.270 08:08:05 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:54.270 08:08:05 -- target/tls.sh@52 -- # gzip -1 -c 00:16:54.270 08:08:05 -- target/tls.sh@52 -- # head -c 4 00:16:54.270 08:08:05 -- target/tls.sh@52 -- # tail -c8 00:16:54.270 08:08:05 -- target/tls.sh@52 -- # crc='p$H�' 00:16:54.528 08:08:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:54.528 08:08:05 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:54.528 08:08:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:54.528 08:08:05 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:54.528 08:08:05 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:54.528 08:08:05 -- target/tls.sh@49 -- # local key hash crc 00:16:54.528 08:08:05 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:54.528 08:08:05 -- target/tls.sh@51 -- # hash=01 00:16:54.528 08:08:05 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:54.528 08:08:05 -- target/tls.sh@52 -- # gzip -1 -c 00:16:54.528 08:08:05 -- target/tls.sh@52 -- # tail -c8 00:16:54.528 08:08:05 -- target/tls.sh@52 -- # head -c 4 00:16:54.528 08:08:05 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:54.528 08:08:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:54.528 08:08:05 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:54.528 08:08:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:54.528 08:08:05 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:54.528 08:08:05 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.528 08:08:05 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:54.528 08:08:05 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:54.528 08:08:05 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
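The two keys above are wrapped in the NVMe/TCP PSK interchange format, and the CRC32 the format appends is pulled out of a gzip trailer rather than computed with a dedicated tool: gzip -1 -c ends its single-member output with an 8-byte trailer whose first four bytes are the little-endian CRC32 of the input, so tail -c8 | head -c4 recovers exactly those bytes. A condensed sketch of the derivation for the first key, using the values printed in the trace (the meaning of the "01" hash identifier is not asserted here, it is simply the value the script uses).

# Sketch of the PSK interchange-format derivation performed above for key1.txt.
key=00112233445566778899aabbccddeeff
hash=01                                                   # hash identifier used by the script
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)  # gzip trailer = CRC32 (LE) + ISIZE
b64=$(echo -n "${key}${crc}" | base64)                    # key bytes followed by their CRC32
echo "NVMeTLSkey-1:${hash}:${b64}:"
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# Holding the raw CRC bytes in a shell variable works here, as in tls.sh,
# because these particular bytes contain no NUL and no trailing newline.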
00:16:54.529 08:08:05 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:54.529 08:08:05 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:54.529 08:08:05 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:54.529 08:08:05 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:55.097 08:08:06 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.097 08:08:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:55.097 08:08:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:55.355 [2024-12-07 08:08:06.412005] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.355 08:08:06 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:55.615 08:08:06 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:55.874 [2024-12-07 08:08:06.908146] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:55.874 [2024-12-07 08:08:06.908420] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.874 08:08:06 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:56.134 malloc0 00:16:56.134 08:08:07 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:56.426 08:08:07 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:56.426 08:08:07 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:08.671 Initializing NVMe Controllers 00:17:08.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:08.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:08.671 Initialization complete. Launching workers. 
00:17:08.671 ======================================================== 00:17:08.671 Latency(us) 00:17:08.671 Device Information : IOPS MiB/s Average min max 00:17:08.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11540.77 45.08 5546.42 1439.94 7647.47 00:17:08.671 ======================================================== 00:17:08.671 Total : 11540.77 45.08 5546.42 1439.94 7647.47 00:17:08.671 00:17:08.671 08:08:17 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:08.671 08:08:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:08.671 08:08:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:08.671 08:08:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:08.671 08:08:17 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:08.671 08:08:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.671 08:08:17 -- target/tls.sh@28 -- # bdevperf_pid=88742 00:17:08.671 08:08:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.671 08:08:17 -- target/tls.sh@31 -- # waitforlisten 88742 /var/tmp/bdevperf.sock 00:17:08.671 08:08:17 -- common/autotest_common.sh@829 -- # '[' -z 88742 ']' 00:17:08.671 08:08:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.672 08:08:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.672 08:08:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.672 08:08:17 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.672 08:08:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.672 08:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:08.672 [2024-12-07 08:08:17.910946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:08.672 [2024-12-07 08:08:17.911724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88742 ] 00:17:08.672 [2024-12-07 08:08:18.049650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.672 [2024-12-07 08:08:18.127834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.672 08:08:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.672 08:08:18 -- common/autotest_common.sh@862 -- # return 0 00:17:08.672 08:08:18 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:08.672 [2024-12-07 08:08:19.166678] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:08.672 TLSTESTn1 00:17:08.672 08:08:19 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:08.672 Running I/O for 10 seconds... 
00:17:18.654 00:17:18.654 Latency(us) 00:17:18.654 [2024-12-07T08:08:29.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.654 [2024-12-07T08:08:29.930Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:18.654 Verification LBA range: start 0x0 length 0x2000 00:17:18.654 TLSTESTn1 : 10.01 6378.77 24.92 0.00 0.00 20035.49 5689.72 22282.24 00:17:18.654 [2024-12-07T08:08:29.930Z] =================================================================================================================== 00:17:18.654 [2024-12-07T08:08:29.930Z] Total : 6378.77 24.92 0.00 0.00 20035.49 5689.72 22282.24 00:17:18.654 0 00:17:18.654 08:08:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.654 08:08:29 -- target/tls.sh@45 -- # killprocess 88742 00:17:18.654 08:08:29 -- common/autotest_common.sh@936 -- # '[' -z 88742 ']' 00:17:18.654 08:08:29 -- common/autotest_common.sh@940 -- # kill -0 88742 00:17:18.654 08:08:29 -- common/autotest_common.sh@941 -- # uname 00:17:18.654 08:08:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.654 08:08:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88742 00:17:18.654 killing process with pid 88742 00:17:18.654 Received shutdown signal, test time was about 10.000000 seconds 00:17:18.654 00:17:18.654 Latency(us) 00:17:18.654 [2024-12-07T08:08:29.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.654 [2024-12-07T08:08:29.930Z] =================================================================================================================== 00:17:18.654 [2024-12-07T08:08:29.930Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.654 08:08:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:18.654 08:08:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:18.654 08:08:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88742' 00:17:18.654 08:08:29 -- common/autotest_common.sh@955 -- # kill 88742 00:17:18.654 08:08:29 -- common/autotest_common.sh@960 -- # wait 88742 00:17:18.654 08:08:29 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.654 08:08:29 -- common/autotest_common.sh@650 -- # local es=0 00:17:18.654 08:08:29 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.654 08:08:29 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:18.654 08:08:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.654 08:08:29 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:18.654 08:08:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:18.654 08:08:29 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:18.654 08:08:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:18.654 08:08:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:18.654 08:08:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:18.654 08:08:29 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:18.654 08:08:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.654 
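The first bdevperf run above attaches with key1.txt, the PSK registered for host1 on the target, and the verify workload passes. The runs that follow break one element at a time (wrong key, wrong hostnqn, wrong subsystem) and are wrapped in NOT, so each step succeeds only if bdev_nvme_attach_controller fails. A minimal sketch of that inversion, assuming only what the trace shows; the real helper in autotest_common.sh also captures the exit status (the "es" checks visible later in the trace) to tell ordinary failures from signal exits.

# Sketch of the NOT wrapper: the step counts as passed only when the wrapped
# command fails. This is an illustrative reimplementation, not the helper itself.
NOT() {
        if "$@"; then
                return 1        # unexpected success
        else
                return 0        # failure was expected
        fi
}
# Expected to fail: the initiator presents key2.txt, but only key1.txt was
# registered for host1 on the target subsystem.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt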
08:08:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:18.654 08:08:29 -- target/tls.sh@28 -- # bdevperf_pid=88898 00:17:18.654 08:08:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.654 08:08:29 -- target/tls.sh@31 -- # waitforlisten 88898 /var/tmp/bdevperf.sock 00:17:18.654 08:08:29 -- common/autotest_common.sh@829 -- # '[' -z 88898 ']' 00:17:18.654 08:08:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.654 08:08:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.654 08:08:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.654 08:08:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.654 08:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:18.654 [2024-12-07 08:08:29.698929] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:18.654 [2024-12-07 08:08:29.699019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88898 ] 00:17:18.654 [2024-12-07 08:08:29.830145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.654 [2024-12-07 08:08:29.897943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.590 08:08:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.590 08:08:30 -- common/autotest_common.sh@862 -- # return 0 00:17:19.590 08:08:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:19.848 [2024-12-07 08:08:30.897955] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.848 [2024-12-07 08:08:30.906105] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:19.848 [2024-12-07 08:08:30.906451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9cc0 (107): Transport endpoint is not connected 00:17:19.848 [2024-12-07 08:08:30.907425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb9cc0 (9): Bad file descriptor 00:17:19.848 [2024-12-07 08:08:30.908421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:19.848 [2024-12-07 08:08:30.908440] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:19.848 [2024-12-07 08:08:30.908449] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:19.848 2024/12/07 08:08:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:19.848 request: 00:17:19.848 { 00:17:19.848 "method": "bdev_nvme_attach_controller", 00:17:19.848 "params": { 00:17:19.848 "name": "TLSTEST", 00:17:19.848 "trtype": "tcp", 00:17:19.848 "traddr": "10.0.0.2", 00:17:19.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.848 "adrfam": "ipv4", 00:17:19.848 "trsvcid": "4420", 00:17:19.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.848 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:19.848 } 00:17:19.848 } 00:17:19.848 Got JSON-RPC error response 00:17:19.848 GoRPCClient: error on JSON-RPC call 00:17:19.848 08:08:30 -- target/tls.sh@36 -- # killprocess 88898 00:17:19.848 08:08:30 -- common/autotest_common.sh@936 -- # '[' -z 88898 ']' 00:17:19.848 08:08:30 -- common/autotest_common.sh@940 -- # kill -0 88898 00:17:19.848 08:08:30 -- common/autotest_common.sh@941 -- # uname 00:17:19.848 08:08:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.848 08:08:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88898 00:17:19.848 killing process with pid 88898 00:17:19.848 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.848 00:17:19.848 Latency(us) 00:17:19.848 [2024-12-07T08:08:31.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.848 [2024-12-07T08:08:31.124Z] =================================================================================================================== 00:17:19.848 [2024-12-07T08:08:31.124Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.848 08:08:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:19.848 08:08:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:19.848 08:08:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88898' 00:17:19.848 08:08:30 -- common/autotest_common.sh@955 -- # kill 88898 00:17:19.848 08:08:30 -- common/autotest_common.sh@960 -- # wait 88898 00:17:20.105 08:08:31 -- target/tls.sh@37 -- # return 1 00:17:20.105 08:08:31 -- common/autotest_common.sh@653 -- # es=1 00:17:20.105 08:08:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:20.105 08:08:31 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:20.105 08:08:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:20.105 08:08:31 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.105 08:08:31 -- common/autotest_common.sh@650 -- # local es=0 00:17:20.106 08:08:31 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.106 08:08:31 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:20.106 08:08:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.106 08:08:31 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:20.106 08:08:31 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:20.106 08:08:31 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:20.106 08:08:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.106 08:08:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.106 08:08:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:20.106 08:08:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:20.106 08:08:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.106 08:08:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.106 08:08:31 -- target/tls.sh@28 -- # bdevperf_pid=88939 00:17:20.106 08:08:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.106 08:08:31 -- target/tls.sh@31 -- # waitforlisten 88939 /var/tmp/bdevperf.sock 00:17:20.106 08:08:31 -- common/autotest_common.sh@829 -- # '[' -z 88939 ']' 00:17:20.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.106 08:08:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.106 08:08:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.106 08:08:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.106 08:08:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.106 08:08:31 -- common/autotest_common.sh@10 -- # set +x 00:17:20.106 [2024-12-07 08:08:31.201601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:20.106 [2024-12-07 08:08:31.201707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88939 ] 00:17:20.106 [2024-12-07 08:08:31.335914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.363 [2024-12-07 08:08:31.410629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.929 08:08:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.929 08:08:32 -- common/autotest_common.sh@862 -- # return 0 00:17:20.929 08:08:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.187 [2024-12-07 08:08:32.363516] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.187 [2024-12-07 08:08:32.368282] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:21.187 [2024-12-07 08:08:32.368321] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:21.187 [2024-12-07 08:08:32.368375] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:21.187 [2024-12-07 08:08:32.368998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1ec3cc0 (107): Transport endpoint is not connected 00:17:21.187 [2024-12-07 08:08:32.369986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec3cc0 (9): Bad file descriptor 00:17:21.187 [2024-12-07 08:08:32.370982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:21.187 [2024-12-07 08:08:32.370998] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:21.187 [2024-12-07 08:08:32.371023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:21.187 2024/12/07 08:08:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:21.187 request: 00:17:21.187 { 00:17:21.187 "method": "bdev_nvme_attach_controller", 00:17:21.187 "params": { 00:17:21.187 "name": "TLSTEST", 00:17:21.187 "trtype": "tcp", 00:17:21.187 "traddr": "10.0.0.2", 00:17:21.187 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:21.187 "adrfam": "ipv4", 00:17:21.188 "trsvcid": "4420", 00:17:21.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.188 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:21.188 } 00:17:21.188 } 00:17:21.188 Got JSON-RPC error response 00:17:21.188 GoRPCClient: error on JSON-RPC call 00:17:21.188 08:08:32 -- target/tls.sh@36 -- # killprocess 88939 00:17:21.188 08:08:32 -- common/autotest_common.sh@936 -- # '[' -z 88939 ']' 00:17:21.188 08:08:32 -- common/autotest_common.sh@940 -- # kill -0 88939 00:17:21.188 08:08:32 -- common/autotest_common.sh@941 -- # uname 00:17:21.188 08:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:21.188 08:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88939 00:17:21.188 killing process with pid 88939 00:17:21.188 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.188 00:17:21.188 Latency(us) 00:17:21.188 [2024-12-07T08:08:32.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.188 [2024-12-07T08:08:32.464Z] =================================================================================================================== 00:17:21.188 [2024-12-07T08:08:32.464Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.188 08:08:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:21.188 08:08:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:21.188 08:08:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88939' 00:17:21.188 08:08:32 -- common/autotest_common.sh@955 -- # kill 88939 00:17:21.188 08:08:32 -- common/autotest_common.sh@960 -- # wait 88939 00:17:21.446 08:08:32 -- target/tls.sh@37 -- # return 1 00:17:21.446 08:08:32 -- common/autotest_common.sh@653 -- # es=1 00:17:21.446 08:08:32 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.446 08:08:32 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.446 08:08:32 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.446 08:08:32 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.446 08:08:32 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:21.446 08:08:32 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.446 08:08:32 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:21.446 08:08:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.446 08:08:32 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:21.446 08:08:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.446 08:08:32 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:21.446 08:08:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:21.446 08:08:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:21.446 08:08:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:21.446 08:08:32 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:21.446 08:08:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:21.446 08:08:32 -- target/tls.sh@28 -- # bdevperf_pid=88985 00:17:21.447 08:08:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:21.447 08:08:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:21.447 08:08:32 -- target/tls.sh@31 -- # waitforlisten 88985 /var/tmp/bdevperf.sock 00:17:21.447 08:08:32 -- common/autotest_common.sh@829 -- # '[' -z 88985 ']' 00:17:21.447 08:08:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:21.447 08:08:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.447 08:08:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.447 08:08:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.447 08:08:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.447 [2024-12-07 08:08:32.679211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
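The case starting here keeps the registered key (key1.txt) and the registered host (host1) but points the attach at nqn.2016-06.io.spdk:cnode2, a subsystem the target never created. The PSK lookup is keyed on the identity built from both NQNs (the "NVMe0R01 <hostnqn> <subnqn>" string in the errors that follow), so changing either NQN breaks the handshake even with the right key file. For contrast, a sketch of the target-side provisioning that would let this attach succeed, reusing the rpc.py calls shown earlier for cnode1; the test deliberately never runs it, and the serial number below is illustrative.

# Hypothetical provisioning for cnode2 (never executed by this test).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt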
00:17:21.447 [2024-12-07 08:08:32.679317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88985 ] 00:17:21.763 [2024-12-07 08:08:32.812080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.763 [2024-12-07 08:08:32.885651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:22.699 08:08:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.699 08:08:33 -- common/autotest_common.sh@862 -- # return 0 00:17:22.699 08:08:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:22.699 [2024-12-07 08:08:33.865969] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.699 [2024-12-07 08:08:33.876401] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:22.699 [2024-12-07 08:08:33.876438] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:22.699 [2024-12-07 08:08:33.876487] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:22.700 [2024-12-07 08:08:33.877501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4cc0 (107): Transport endpoint is not connected 00:17:22.700 [2024-12-07 08:08:33.878463] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa4cc0 (9): Bad file descriptor 00:17:22.700 [2024-12-07 08:08:33.879459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:22.700 [2024-12-07 08:08:33.879477] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:22.700 [2024-12-07 08:08:33.879486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:22.700 2024/12/07 08:08:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:22.700 request: 00:17:22.700 { 00:17:22.700 "method": "bdev_nvme_attach_controller", 00:17:22.700 "params": { 00:17:22.700 "name": "TLSTEST", 00:17:22.700 "trtype": "tcp", 00:17:22.700 "traddr": "10.0.0.2", 00:17:22.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.700 "adrfam": "ipv4", 00:17:22.700 "trsvcid": "4420", 00:17:22.700 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:22.700 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:22.700 } 00:17:22.700 } 00:17:22.700 Got JSON-RPC error response 00:17:22.700 GoRPCClient: error on JSON-RPC call 00:17:22.700 08:08:33 -- target/tls.sh@36 -- # killprocess 88985 00:17:22.700 08:08:33 -- common/autotest_common.sh@936 -- # '[' -z 88985 ']' 00:17:22.700 08:08:33 -- common/autotest_common.sh@940 -- # kill -0 88985 00:17:22.700 08:08:33 -- common/autotest_common.sh@941 -- # uname 00:17:22.700 08:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.700 08:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88985 00:17:22.700 killing process with pid 88985 00:17:22.700 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.700 00:17:22.700 Latency(us) 00:17:22.700 [2024-12-07T08:08:33.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.700 [2024-12-07T08:08:33.976Z] =================================================================================================================== 00:17:22.700 [2024-12-07T08:08:33.976Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.700 08:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:22.700 08:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:22.700 08:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88985' 00:17:22.700 08:08:33 -- common/autotest_common.sh@955 -- # kill 88985 00:17:22.700 08:08:33 -- common/autotest_common.sh@960 -- # wait 88985 00:17:22.958 08:08:34 -- target/tls.sh@37 -- # return 1 00:17:22.958 08:08:34 -- common/autotest_common.sh@653 -- # es=1 00:17:22.958 08:08:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.958 08:08:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.958 08:08:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.958 08:08:34 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:22.958 08:08:34 -- common/autotest_common.sh@650 -- # local es=0 00:17:22.958 08:08:34 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:22.958 08:08:34 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:22.958 08:08:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.958 08:08:34 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:22.958 08:08:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:22.958 08:08:34 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:22.958 08:08:34 -- 
target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:22.958 08:08:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:22.958 08:08:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:22.958 08:08:34 -- target/tls.sh@23 -- # psk= 00:17:22.958 08:08:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:22.958 08:08:34 -- target/tls.sh@28 -- # bdevperf_pid=89025 00:17:22.958 08:08:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:22.958 08:08:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:22.958 08:08:34 -- target/tls.sh@31 -- # waitforlisten 89025 /var/tmp/bdevperf.sock 00:17:22.958 08:08:34 -- common/autotest_common.sh@829 -- # '[' -z 89025 ']' 00:17:22.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.958 08:08:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.958 08:08:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.958 08:08:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:22.958 08:08:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.958 08:08:34 -- common/autotest_common.sh@10 -- # set +x 00:17:22.958 [2024-12-07 08:08:34.180017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:22.958 [2024-12-07 08:08:34.180136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89025 ] 00:17:23.216 [2024-12-07 08:08:34.309695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.216 [2024-12-07 08:08:34.379674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.149 08:08:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:24.149 08:08:35 -- common/autotest_common.sh@862 -- # return 0 00:17:24.149 08:08:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:24.149 [2024-12-07 08:08:35.376649] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:24.149 [2024-12-07 08:08:35.378316] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60b8c0 (9): Bad file descriptor 00:17:24.149 [2024-12-07 08:08:35.379312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:24.150 [2024-12-07 08:08:35.379331] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:24.150 [2024-12-07 08:08:35.379340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
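The two attach failures above are PSK-lookup misses rather than transport faults: the target searches for a key registered under the exact identity string it prints, "NVMe0R01 <hostnqn> <subnqn>", so key1.txt presented for a host/subsystem pairing other than the one it was registered for is simply never found and the TLS handshake cannot complete. A minimal illustrative sketch of that identity, using the NQNs from this trace (not part of the test script itself):

    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    # Identity the server-side callback looks up, exactly as printed in the errors above;
    # nvmf_subsystem_add_host must have registered a PSK for this host/subsystem pair.
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"

The next test drops the PSK argument entirely and expects the same end result: the controller never leaves its error state and the RPC returns -32602.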
00:17:24.150 2024/12/07 08:08:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:24.150 request: 00:17:24.150 { 00:17:24.150 "method": "bdev_nvme_attach_controller", 00:17:24.150 "params": { 00:17:24.150 "name": "TLSTEST", 00:17:24.150 "trtype": "tcp", 00:17:24.150 "traddr": "10.0.0.2", 00:17:24.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:24.150 "adrfam": "ipv4", 00:17:24.150 "trsvcid": "4420", 00:17:24.150 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:24.150 } 00:17:24.150 } 00:17:24.150 Got JSON-RPC error response 00:17:24.150 GoRPCClient: error on JSON-RPC call 00:17:24.150 08:08:35 -- target/tls.sh@36 -- # killprocess 89025 00:17:24.150 08:08:35 -- common/autotest_common.sh@936 -- # '[' -z 89025 ']' 00:17:24.150 08:08:35 -- common/autotest_common.sh@940 -- # kill -0 89025 00:17:24.150 08:08:35 -- common/autotest_common.sh@941 -- # uname 00:17:24.150 08:08:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.150 08:08:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89025 00:17:24.407 killing process with pid 89025 00:17:24.407 Received shutdown signal, test time was about 10.000000 seconds 00:17:24.407 00:17:24.407 Latency(us) 00:17:24.407 [2024-12-07T08:08:35.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.407 [2024-12-07T08:08:35.683Z] =================================================================================================================== 00:17:24.407 [2024-12-07T08:08:35.683Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:24.407 08:08:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:24.407 08:08:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:24.407 08:08:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89025' 00:17:24.407 08:08:35 -- common/autotest_common.sh@955 -- # kill 89025 00:17:24.407 08:08:35 -- common/autotest_common.sh@960 -- # wait 89025 00:17:24.407 08:08:35 -- target/tls.sh@37 -- # return 1 00:17:24.407 08:08:35 -- common/autotest_common.sh@653 -- # es=1 00:17:24.407 08:08:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.407 08:08:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.407 08:08:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.407 08:08:35 -- target/tls.sh@167 -- # killprocess 88374 00:17:24.407 08:08:35 -- common/autotest_common.sh@936 -- # '[' -z 88374 ']' 00:17:24.407 08:08:35 -- common/autotest_common.sh@940 -- # kill -0 88374 00:17:24.407 08:08:35 -- common/autotest_common.sh@941 -- # uname 00:17:24.407 08:08:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.407 08:08:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88374 00:17:24.407 killing process with pid 88374 00:17:24.407 08:08:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:24.407 08:08:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:24.407 08:08:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88374' 00:17:24.407 08:08:35 -- common/autotest_common.sh@955 -- # kill 88374 00:17:24.407 08:08:35 -- common/autotest_common.sh@960 -- # wait 88374 00:17:24.665 08:08:35 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:24.665 08:08:35 -- target/tls.sh@49 -- # local key hash crc 00:17:24.665 08:08:35 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:24.665 08:08:35 -- target/tls.sh@51 -- # hash=02 00:17:24.665 08:08:35 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:24.665 08:08:35 -- target/tls.sh@52 -- # head -c 4 00:17:24.665 08:08:35 -- target/tls.sh@52 -- # tail -c8 00:17:24.665 08:08:35 -- target/tls.sh@52 -- # gzip -1 -c 00:17:24.665 08:08:35 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:24.665 08:08:35 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:24.665 08:08:35 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:24.665 08:08:35 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:24.665 08:08:35 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:24.665 08:08:35 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:24.665 08:08:35 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:24.665 08:08:35 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:24.665 08:08:35 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:24.665 08:08:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:24.665 08:08:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.665 08:08:35 -- common/autotest_common.sh@10 -- # set +x 00:17:24.665 08:08:35 -- nvmf/common.sh@469 -- # nvmfpid=89091 00:17:24.665 08:08:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:24.665 08:08:35 -- nvmf/common.sh@470 -- # waitforlisten 89091 00:17:24.665 08:08:35 -- common/autotest_common.sh@829 -- # '[' -z 89091 ']' 00:17:24.665 08:08:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.665 08:08:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.665 08:08:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.665 08:08:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.665 08:08:35 -- common/autotest_common.sh@10 -- # set +x 00:17:24.665 [2024-12-07 08:08:35.928867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:24.665 [2024-12-07 08:08:35.928959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.922 [2024-12-07 08:08:36.064712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.922 [2024-12-07 08:08:36.127665] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:24.922 [2024-12-07 08:08:36.127835] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
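key_long.txt above is produced by format_interchange_psk, which builds the TLS PSK interchange string: append a CRC32 of the configured key bytes (pulled here out of a gzip trailer, since gzip stores the CRC32 in the first four of its last eight bytes), base64-encode key-plus-CRC, and wrap the result as NVMeTLSkey-1:<hash id>:<base64>:. A standalone sketch of the same derivation, reproducing the pipeline traced above for the 48-character key and hash id 02 (like the traced helper, it assumes the CRC bytes contain no NUL):

    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02                                   # hash identifier carried in the interchange string
    # gzip -1 writes the CRC32 of its input as the first 4 of its trailing 8 bytes.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # Interchange form: base64(key bytes || CRC32), wrapped with version and hash id.
    psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The file is then chmod 0600 before being handed to the target, which matters for the permission tests later in this run.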
00:17:24.922 [2024-12-07 08:08:36.127848] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.922 [2024-12-07 08:08:36.127857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.922 [2024-12-07 08:08:36.127886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.853 08:08:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.853 08:08:36 -- common/autotest_common.sh@862 -- # return 0 00:17:25.853 08:08:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:25.853 08:08:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.853 08:08:36 -- common/autotest_common.sh@10 -- # set +x 00:17:25.853 08:08:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.853 08:08:36 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.853 08:08:36 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:25.853 08:08:36 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:26.110 [2024-12-07 08:08:37.164006] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.110 08:08:37 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.368 08:08:37 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.368 [2024-12-07 08:08:37.592108] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.368 [2024-12-07 08:08:37.592374] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.368 08:08:37 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:26.626 malloc0 00:17:26.883 08:08:37 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:27.161 08:08:38 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.161 08:08:38 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.161 08:08:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.161 08:08:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.161 08:08:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.161 08:08:38 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:27.161 08:08:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.161 08:08:38 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.161 08:08:38 -- target/tls.sh@28 -- # bdevperf_pid=89194 00:17:27.161 08:08:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.161 08:08:38 -- target/tls.sh@31 -- # waitforlisten 89194 /var/tmp/bdevperf.sock 00:17:27.161 08:08:38 -- 
common/autotest_common.sh@829 -- # '[' -z 89194 ']' 00:17:27.161 08:08:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.161 08:08:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.161 08:08:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.161 08:08:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.161 08:08:38 -- common/autotest_common.sh@10 -- # set +x 00:17:27.420 [2024-12-07 08:08:38.466488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:27.420 [2024-12-07 08:08:38.466577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89194 ] 00:17:27.420 [2024-12-07 08:08:38.598508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.420 [2024-12-07 08:08:38.667498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.358 08:08:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.358 08:08:39 -- common/autotest_common.sh@862 -- # return 0 00:17:28.358 08:08:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.617 [2024-12-07 08:08:39.661085] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.617 TLSTESTn1 00:17:28.617 08:08:39 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:28.617 Running I/O for 10 seconds... 
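This is the first passing path in the run: a fresh target is configured with a TLS listener and key_long.txt registered for host1, the bdevperf initiator attaches with the same key, a TLSTESTn1 bdev appears, and verify I/O runs for ten seconds. Condensed from the RPCs traced above, with the long repository paths abbreviated (everything else is exactly what the trace shows):

    # Target side, over /var/tmp/spdk.sock
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key_long.txt

    # Initiator side, over the bdevperf RPC socket
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key_long.txt
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The result table that follows is the only one in this stretch of the log with non-zero IOPS; the other bdevperf attaches here are negative tests, so their tables report zero I/O.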
00:17:38.604 00:17:38.604 Latency(us) 00:17:38.604 [2024-12-07T08:08:49.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.604 [2024-12-07T08:08:49.880Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:38.604 Verification LBA range: start 0x0 length 0x2000 00:17:38.604 TLSTESTn1 : 10.02 5891.60 23.01 0.00 0.00 21688.84 4676.89 20137.43 00:17:38.604 [2024-12-07T08:08:49.880Z] =================================================================================================================== 00:17:38.604 [2024-12-07T08:08:49.880Z] Total : 5891.60 23.01 0.00 0.00 21688.84 4676.89 20137.43 00:17:38.604 0 00:17:38.872 08:08:49 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:38.872 08:08:49 -- target/tls.sh@45 -- # killprocess 89194 00:17:38.872 08:08:49 -- common/autotest_common.sh@936 -- # '[' -z 89194 ']' 00:17:38.872 08:08:49 -- common/autotest_common.sh@940 -- # kill -0 89194 00:17:38.872 08:08:49 -- common/autotest_common.sh@941 -- # uname 00:17:38.872 08:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.872 08:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89194 00:17:38.872 killing process with pid 89194 00:17:38.872 Received shutdown signal, test time was about 10.000000 seconds 00:17:38.872 00:17:38.872 Latency(us) 00:17:38.872 [2024-12-07T08:08:50.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.872 [2024-12-07T08:08:50.148Z] =================================================================================================================== 00:17:38.872 [2024-12-07T08:08:50.148Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:38.872 08:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:38.872 08:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:38.872 08:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89194' 00:17:38.872 08:08:49 -- common/autotest_common.sh@955 -- # kill 89194 00:17:38.872 08:08:49 -- common/autotest_common.sh@960 -- # wait 89194 00:17:38.872 08:08:50 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.872 08:08:50 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.872 08:08:50 -- common/autotest_common.sh@650 -- # local es=0 00:17:38.872 08:08:50 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.872 08:08:50 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:38.872 08:08:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.873 08:08:50 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:38.873 08:08:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.873 08:08:50 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:38.873 08:08:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:38.873 08:08:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:38.873 08:08:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:38.873 08:08:50 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:38.873 08:08:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:38.873 08:08:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:38.873 08:08:50 -- target/tls.sh@28 -- # bdevperf_pid=89341 00:17:38.873 08:08:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:38.873 08:08:50 -- target/tls.sh@31 -- # waitforlisten 89341 /var/tmp/bdevperf.sock 00:17:38.873 08:08:50 -- common/autotest_common.sh@829 -- # '[' -z 89341 ']' 00:17:38.873 08:08:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:38.873 08:08:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:38.873 08:08:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:38.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:38.873 08:08:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:38.873 08:08:50 -- common/autotest_common.sh@10 -- # set +x 00:17:39.154 [2024-12-07 08:08:50.169689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:39.154 [2024-12-07 08:08:50.169814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89341 ] 00:17:39.154 [2024-12-07 08:08:50.302294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.154 [2024-12-07 08:08:50.379042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.094 08:08:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.094 08:08:51 -- common/autotest_common.sh@862 -- # return 0 00:17:40.094 08:08:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:40.352 [2024-12-07 08:08:51.426408] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.352 [2024-12-07 08:08:51.426453] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:40.352 2024/12/07 08:08:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:40.352 request: 00:17:40.352 { 00:17:40.352 "method": "bdev_nvme_attach_controller", 00:17:40.352 "params": { 00:17:40.352 "name": "TLSTEST", 00:17:40.352 "trtype": "tcp", 00:17:40.352 "traddr": "10.0.0.2", 00:17:40.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.352 "adrfam": "ipv4", 00:17:40.352 "trsvcid": "4420", 00:17:40.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.352 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:40.352 } 00:17:40.352 } 00:17:40.352 Got 
JSON-RPC error response 00:17:40.352 GoRPCClient: error on JSON-RPC call 00:17:40.352 08:08:51 -- target/tls.sh@36 -- # killprocess 89341 00:17:40.352 08:08:51 -- common/autotest_common.sh@936 -- # '[' -z 89341 ']' 00:17:40.352 08:08:51 -- common/autotest_common.sh@940 -- # kill -0 89341 00:17:40.352 08:08:51 -- common/autotest_common.sh@941 -- # uname 00:17:40.352 08:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.352 08:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89341 00:17:40.352 08:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.352 08:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.352 08:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89341' 00:17:40.352 killing process with pid 89341 00:17:40.352 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.352 00:17:40.352 Latency(us) 00:17:40.352 [2024-12-07T08:08:51.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.352 [2024-12-07T08:08:51.628Z] =================================================================================================================== 00:17:40.352 [2024-12-07T08:08:51.628Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.352 08:08:51 -- common/autotest_common.sh@955 -- # kill 89341 00:17:40.352 08:08:51 -- common/autotest_common.sh@960 -- # wait 89341 00:17:40.612 08:08:51 -- target/tls.sh@37 -- # return 1 00:17:40.612 08:08:51 -- common/autotest_common.sh@653 -- # es=1 00:17:40.612 08:08:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.613 08:08:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.613 08:08:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.613 08:08:51 -- target/tls.sh@183 -- # killprocess 89091 00:17:40.613 08:08:51 -- common/autotest_common.sh@936 -- # '[' -z 89091 ']' 00:17:40.613 08:08:51 -- common/autotest_common.sh@940 -- # kill -0 89091 00:17:40.613 08:08:51 -- common/autotest_common.sh@941 -- # uname 00:17:40.613 08:08:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.613 08:08:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89091 00:17:40.613 08:08:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:40.613 08:08:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:40.613 killing process with pid 89091 00:17:40.613 08:08:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89091' 00:17:40.613 08:08:51 -- common/autotest_common.sh@955 -- # kill 89091 00:17:40.613 08:08:51 -- common/autotest_common.sh@960 -- # wait 89091 00:17:40.871 08:08:51 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:40.871 08:08:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:40.871 08:08:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.871 08:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:40.871 08:08:51 -- nvmf/common.sh@469 -- # nvmfpid=89397 00:17:40.871 08:08:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:40.871 08:08:51 -- nvmf/common.sh@470 -- # waitforlisten 89397 00:17:40.871 08:08:51 -- common/autotest_common.sh@829 -- # '[' -z 89397 ']' 00:17:40.871 08:08:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.871 08:08:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.871 
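The test above flips key_long.txt to mode 0666 and retries the attach; bdev_nvme_attach_controller now refuses to load the key ("Incorrect permissions for PSK file" from tcp_load_psk) and the RPC fails with -22. The rule implied by this run is that the PSK file must not be group- or world-accessible, so restoring owner-only access is enough (sketch, repository path abbreviated):

    psk=key_long.txt
    chmod 0600 "$psk"        # 0600 was accepted earlier in the trace, 0666 is rejected here
    stat -c '%a' "$psk"      # expect 600

The target applies the same check a little further down: nvmf_subsystem_add_host fails against the still world-readable file, and the harness runs chmod 0600 on it before the final, passing setup.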
08:08:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.871 08:08:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.871 08:08:51 -- common/autotest_common.sh@10 -- # set +x 00:17:40.871 [2024-12-07 08:08:51.968422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:40.871 [2024-12-07 08:08:51.968501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.871 [2024-12-07 08:08:52.102122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.129 [2024-12-07 08:08:52.165178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:41.129 [2024-12-07 08:08:52.165334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.129 [2024-12-07 08:08:52.165347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.129 [2024-12-07 08:08:52.165355] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.129 [2024-12-07 08:08:52.165385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.695 08:08:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.695 08:08:52 -- common/autotest_common.sh@862 -- # return 0 00:17:41.695 08:08:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:41.695 08:08:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:41.695 08:08:52 -- common/autotest_common.sh@10 -- # set +x 00:17:41.952 08:08:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.952 08:08:52 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.952 08:08:52 -- common/autotest_common.sh@650 -- # local es=0 00:17:41.952 08:08:52 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.952 08:08:52 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:41.952 08:08:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.952 08:08:52 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:41.952 08:08:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.952 08:08:52 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.952 08:08:52 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.952 08:08:52 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.210 [2024-12-07 08:08:53.228355] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.210 08:08:53 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:42.468 08:08:53 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:42.468 
[2024-12-07 08:08:53.712433] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:42.468 [2024-12-07 08:08:53.712738] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.468 08:08:53 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:42.727 malloc0 00:17:42.727 08:08:53 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.985 08:08:54 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:43.244 [2024-12-07 08:08:54.367833] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:43.244 [2024-12-07 08:08:54.367885] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:43.244 [2024-12-07 08:08:54.367902] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:43.244 2024/12/07 08:08:54 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:43.244 request: 00:17:43.244 { 00:17:43.244 "method": "nvmf_subsystem_add_host", 00:17:43.244 "params": { 00:17:43.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.244 "host": "nqn.2016-06.io.spdk:host1", 00:17:43.244 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:43.244 } 00:17:43.244 } 00:17:43.244 Got JSON-RPC error response 00:17:43.244 GoRPCClient: error on JSON-RPC call 00:17:43.244 08:08:54 -- common/autotest_common.sh@653 -- # es=1 00:17:43.244 08:08:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.244 08:08:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.244 08:08:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.244 08:08:54 -- target/tls.sh@189 -- # killprocess 89397 00:17:43.244 08:08:54 -- common/autotest_common.sh@936 -- # '[' -z 89397 ']' 00:17:43.244 08:08:54 -- common/autotest_common.sh@940 -- # kill -0 89397 00:17:43.244 08:08:54 -- common/autotest_common.sh@941 -- # uname 00:17:43.244 08:08:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.244 08:08:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89397 00:17:43.244 08:08:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:43.244 08:08:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:43.244 killing process with pid 89397 00:17:43.244 08:08:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89397' 00:17:43.244 08:08:54 -- common/autotest_common.sh@955 -- # kill 89397 00:17:43.244 08:08:54 -- common/autotest_common.sh@960 -- # wait 89397 00:17:43.503 08:08:54 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:43.503 08:08:54 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:43.503 08:08:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:43.503 08:08:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:43.503 08:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:43.503 08:08:54 -- nvmf/common.sh@469 -- # nvmfpid=89502 
00:17:43.503 08:08:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:43.503 08:08:54 -- nvmf/common.sh@470 -- # waitforlisten 89502 00:17:43.503 08:08:54 -- common/autotest_common.sh@829 -- # '[' -z 89502 ']' 00:17:43.503 08:08:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.503 08:08:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.503 08:08:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.503 08:08:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.503 08:08:54 -- common/autotest_common.sh@10 -- # set +x 00:17:43.503 [2024-12-07 08:08:54.695902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:43.503 [2024-12-07 08:08:54.695998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.762 [2024-12-07 08:08:54.838083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.762 [2024-12-07 08:08:54.913183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.762 [2024-12-07 08:08:54.913314] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.762 [2024-12-07 08:08:54.913327] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.762 [2024-12-07 08:08:54.913335] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:43.762 [2024-12-07 08:08:54.913359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.701 08:08:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.701 08:08:55 -- common/autotest_common.sh@862 -- # return 0 00:17:44.701 08:08:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:44.701 08:08:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.701 08:08:55 -- common/autotest_common.sh@10 -- # set +x 00:17:44.701 08:08:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.701 08:08:55 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.701 08:08:55 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.701 08:08:55 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:44.701 [2024-12-07 08:08:55.952827] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.701 08:08:55 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:45.277 08:08:56 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:45.277 [2024-12-07 08:08:56.492970] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:45.277 [2024-12-07 08:08:56.493449] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.277 08:08:56 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:45.536 malloc0 00:17:45.536 08:08:56 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:45.795 08:08:56 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:46.054 08:08:57 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:46.054 08:08:57 -- target/tls.sh@197 -- # bdevperf_pid=89609 00:17:46.054 08:08:57 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.054 08:08:57 -- target/tls.sh@200 -- # waitforlisten 89609 /var/tmp/bdevperf.sock 00:17:46.054 08:08:57 -- common/autotest_common.sh@829 -- # '[' -z 89609 ']' 00:17:46.054 08:08:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.054 08:08:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.054 08:08:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.054 08:08:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.054 08:08:57 -- common/autotest_common.sh@10 -- # set +x 00:17:46.054 [2024-12-07 08:08:57.207542] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:46.054 [2024-12-07 08:08:57.207635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89609 ] 00:17:46.313 [2024-12-07 08:08:57.340059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.313 [2024-12-07 08:08:57.406353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.248 08:08:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.248 08:08:58 -- common/autotest_common.sh@862 -- # return 0 00:17:47.248 08:08:58 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:47.248 [2024-12-07 08:08:58.461599] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.507 TLSTESTn1 00:17:47.507 08:08:58 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:47.765 08:08:58 -- target/tls.sh@205 -- # tgtconf='{ 00:17:47.765 "subsystems": [ 00:17:47.765 { 00:17:47.765 "subsystem": "iobuf", 00:17:47.765 "config": [ 00:17:47.766 { 00:17:47.766 "method": "iobuf_set_options", 00:17:47.766 "params": { 00:17:47.766 "large_bufsize": 135168, 00:17:47.766 "large_pool_count": 1024, 00:17:47.766 "small_bufsize": 8192, 00:17:47.766 "small_pool_count": 8192 00:17:47.766 } 00:17:47.766 } 00:17:47.766 ] 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "subsystem": "sock", 00:17:47.766 "config": [ 00:17:47.766 { 00:17:47.766 "method": "sock_impl_set_options", 00:17:47.766 "params": { 00:17:47.766 "enable_ktls": false, 00:17:47.766 "enable_placement_id": 0, 00:17:47.766 "enable_quickack": false, 00:17:47.766 "enable_recv_pipe": true, 00:17:47.766 "enable_zerocopy_send_client": false, 00:17:47.766 "enable_zerocopy_send_server": true, 00:17:47.766 "impl_name": "posix", 00:17:47.766 "recv_buf_size": 2097152, 00:17:47.766 "send_buf_size": 2097152, 00:17:47.766 "tls_version": 0, 00:17:47.766 "zerocopy_threshold": 0 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "sock_impl_set_options", 00:17:47.766 "params": { 00:17:47.766 "enable_ktls": false, 00:17:47.766 "enable_placement_id": 0, 00:17:47.766 "enable_quickack": false, 00:17:47.766 "enable_recv_pipe": true, 00:17:47.766 "enable_zerocopy_send_client": false, 00:17:47.766 "enable_zerocopy_send_server": true, 00:17:47.766 "impl_name": "ssl", 00:17:47.766 "recv_buf_size": 4096, 00:17:47.766 "send_buf_size": 4096, 00:17:47.766 "tls_version": 0, 00:17:47.766 "zerocopy_threshold": 0 00:17:47.766 } 00:17:47.766 } 00:17:47.766 ] 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "subsystem": "vmd", 00:17:47.766 "config": [] 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "subsystem": "accel", 00:17:47.766 "config": [ 00:17:47.766 { 00:17:47.766 "method": "accel_set_options", 00:17:47.766 "params": { 00:17:47.766 "buf_count": 2048, 00:17:47.766 "large_cache_size": 16, 00:17:47.766 "sequence_count": 2048, 00:17:47.766 "small_cache_size": 128, 00:17:47.766 "task_count": 2048 00:17:47.766 } 00:17:47.766 } 00:17:47.766 ] 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "subsystem": "bdev", 00:17:47.766 "config": [ 00:17:47.766 { 00:17:47.766 "method": "bdev_set_options", 00:17:47.766 "params": { 00:17:47.766 
"bdev_auto_examine": true, 00:17:47.766 "bdev_io_cache_size": 256, 00:17:47.766 "bdev_io_pool_size": 65535, 00:17:47.766 "iobuf_large_cache_size": 16, 00:17:47.766 "iobuf_small_cache_size": 128 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "bdev_raid_set_options", 00:17:47.766 "params": { 00:17:47.766 "process_window_size_kb": 1024 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "bdev_iscsi_set_options", 00:17:47.766 "params": { 00:17:47.766 "timeout_sec": 30 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "bdev_nvme_set_options", 00:17:47.766 "params": { 00:17:47.766 "action_on_timeout": "none", 00:17:47.766 "allow_accel_sequence": false, 00:17:47.766 "arbitration_burst": 0, 00:17:47.766 "bdev_retry_count": 3, 00:17:47.766 "ctrlr_loss_timeout_sec": 0, 00:17:47.766 "delay_cmd_submit": true, 00:17:47.766 "fast_io_fail_timeout_sec": 0, 00:17:47.766 "generate_uuids": false, 00:17:47.766 "high_priority_weight": 0, 00:17:47.766 "io_path_stat": false, 00:17:47.766 "io_queue_requests": 0, 00:17:47.766 "keep_alive_timeout_ms": 10000, 00:17:47.766 "low_priority_weight": 0, 00:17:47.766 "medium_priority_weight": 0, 00:17:47.766 "nvme_adminq_poll_period_us": 10000, 00:17:47.766 "nvme_ioq_poll_period_us": 0, 00:17:47.766 "reconnect_delay_sec": 0, 00:17:47.766 "timeout_admin_us": 0, 00:17:47.766 "timeout_us": 0, 00:17:47.766 "transport_ack_timeout": 0, 00:17:47.766 "transport_retry_count": 4, 00:17:47.766 "transport_tos": 0 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "bdev_nvme_set_hotplug", 00:17:47.766 "params": { 00:17:47.766 "enable": false, 00:17:47.766 "period_us": 100000 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "bdev_malloc_create", 00:17:47.766 "params": { 00:17:47.766 "block_size": 4096, 00:17:47.766 "name": "malloc0", 00:17:47.766 "num_blocks": 8192, 00:17:47.766 "optimal_io_boundary": 0, 00:17:47.766 "physical_block_size": 4096, 00:17:47.766 "uuid": "dee6a9ed-5a1d-4046-81b2-fad7d102c701" 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "bdev_wait_for_examine" 00:17:47.766 } 00:17:47.766 ] 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "subsystem": "nbd", 00:17:47.766 "config": [] 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "subsystem": "scheduler", 00:17:47.766 "config": [ 00:17:47.766 { 00:17:47.766 "method": "framework_set_scheduler", 00:17:47.766 "params": { 00:17:47.766 "name": "static" 00:17:47.766 } 00:17:47.766 } 00:17:47.766 ] 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "subsystem": "nvmf", 00:17:47.766 "config": [ 00:17:47.766 { 00:17:47.766 "method": "nvmf_set_config", 00:17:47.766 "params": { 00:17:47.766 "admin_cmd_passthru": { 00:17:47.766 "identify_ctrlr": false 00:17:47.766 }, 00:17:47.766 "discovery_filter": "match_any" 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "nvmf_set_max_subsystems", 00:17:47.766 "params": { 00:17:47.766 "max_subsystems": 1024 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "nvmf_set_crdt", 00:17:47.766 "params": { 00:17:47.766 "crdt1": 0, 00:17:47.766 "crdt2": 0, 00:17:47.766 "crdt3": 0 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "nvmf_create_transport", 00:17:47.766 "params": { 00:17:47.766 "abort_timeout_sec": 1, 00:17:47.766 "buf_cache_size": 4294967295, 00:17:47.766 "c2h_success": false, 00:17:47.766 "dif_insert_or_strip": false, 00:17:47.766 "in_capsule_data_size": 4096, 00:17:47.766 "io_unit_size": 131072, 00:17:47.766 "max_aq_depth": 128, 
00:17:47.766 "max_io_qpairs_per_ctrlr": 127, 00:17:47.766 "max_io_size": 131072, 00:17:47.766 "max_queue_depth": 128, 00:17:47.766 "num_shared_buffers": 511, 00:17:47.766 "sock_priority": 0, 00:17:47.766 "trtype": "TCP", 00:17:47.766 "zcopy": false 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "nvmf_create_subsystem", 00:17:47.766 "params": { 00:17:47.766 "allow_any_host": false, 00:17:47.766 "ana_reporting": false, 00:17:47.766 "max_cntlid": 65519, 00:17:47.766 "max_namespaces": 10, 00:17:47.766 "min_cntlid": 1, 00:17:47.766 "model_number": "SPDK bdev Controller", 00:17:47.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.766 "serial_number": "SPDK00000000000001" 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "nvmf_subsystem_add_host", 00:17:47.766 "params": { 00:17:47.766 "host": "nqn.2016-06.io.spdk:host1", 00:17:47.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.766 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:47.766 } 00:17:47.766 }, 00:17:47.766 { 00:17:47.766 "method": "nvmf_subsystem_add_ns", 00:17:47.766 "params": { 00:17:47.766 "namespace": { 00:17:47.766 "bdev_name": "malloc0", 00:17:47.766 "nguid": "DEE6A9ED5A1D404681B2FAD7D102C701", 00:17:47.766 "nsid": 1, 00:17:47.766 "uuid": "dee6a9ed-5a1d-4046-81b2-fad7d102c701" 00:17:47.766 }, 00:17:47.766 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:47.767 } 00:17:47.767 }, 00:17:47.767 { 00:17:47.767 "method": "nvmf_subsystem_add_listener", 00:17:47.767 "params": { 00:17:47.767 "listen_address": { 00:17:47.767 "adrfam": "IPv4", 00:17:47.767 "traddr": "10.0.0.2", 00:17:47.767 "trsvcid": "4420", 00:17:47.767 "trtype": "TCP" 00:17:47.767 }, 00:17:47.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.767 "secure_channel": true 00:17:47.767 } 00:17:47.767 } 00:17:47.767 ] 00:17:47.767 } 00:17:47.767 ] 00:17:47.767 }' 00:17:47.767 08:08:58 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:48.025 08:08:59 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:48.025 "subsystems": [ 00:17:48.025 { 00:17:48.025 "subsystem": "iobuf", 00:17:48.025 "config": [ 00:17:48.025 { 00:17:48.025 "method": "iobuf_set_options", 00:17:48.025 "params": { 00:17:48.025 "large_bufsize": 135168, 00:17:48.025 "large_pool_count": 1024, 00:17:48.025 "small_bufsize": 8192, 00:17:48.025 "small_pool_count": 8192 00:17:48.025 } 00:17:48.025 } 00:17:48.025 ] 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "subsystem": "sock", 00:17:48.026 "config": [ 00:17:48.026 { 00:17:48.026 "method": "sock_impl_set_options", 00:17:48.026 "params": { 00:17:48.026 "enable_ktls": false, 00:17:48.026 "enable_placement_id": 0, 00:17:48.026 "enable_quickack": false, 00:17:48.026 "enable_recv_pipe": true, 00:17:48.026 "enable_zerocopy_send_client": false, 00:17:48.026 "enable_zerocopy_send_server": true, 00:17:48.026 "impl_name": "posix", 00:17:48.026 "recv_buf_size": 2097152, 00:17:48.026 "send_buf_size": 2097152, 00:17:48.026 "tls_version": 0, 00:17:48.026 "zerocopy_threshold": 0 00:17:48.026 } 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "method": "sock_impl_set_options", 00:17:48.026 "params": { 00:17:48.026 "enable_ktls": false, 00:17:48.026 "enable_placement_id": 0, 00:17:48.026 "enable_quickack": false, 00:17:48.026 "enable_recv_pipe": true, 00:17:48.026 "enable_zerocopy_send_client": false, 00:17:48.026 "enable_zerocopy_send_server": true, 00:17:48.026 "impl_name": "ssl", 00:17:48.026 "recv_buf_size": 4096, 00:17:48.026 "send_buf_size": 4096, 00:17:48.026 
"tls_version": 0, 00:17:48.026 "zerocopy_threshold": 0 00:17:48.026 } 00:17:48.026 } 00:17:48.026 ] 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "subsystem": "vmd", 00:17:48.026 "config": [] 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "subsystem": "accel", 00:17:48.026 "config": [ 00:17:48.026 { 00:17:48.026 "method": "accel_set_options", 00:17:48.026 "params": { 00:17:48.026 "buf_count": 2048, 00:17:48.026 "large_cache_size": 16, 00:17:48.026 "sequence_count": 2048, 00:17:48.026 "small_cache_size": 128, 00:17:48.026 "task_count": 2048 00:17:48.026 } 00:17:48.026 } 00:17:48.026 ] 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "subsystem": "bdev", 00:17:48.026 "config": [ 00:17:48.026 { 00:17:48.026 "method": "bdev_set_options", 00:17:48.026 "params": { 00:17:48.026 "bdev_auto_examine": true, 00:17:48.026 "bdev_io_cache_size": 256, 00:17:48.026 "bdev_io_pool_size": 65535, 00:17:48.026 "iobuf_large_cache_size": 16, 00:17:48.026 "iobuf_small_cache_size": 128 00:17:48.026 } 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "method": "bdev_raid_set_options", 00:17:48.026 "params": { 00:17:48.026 "process_window_size_kb": 1024 00:17:48.026 } 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "method": "bdev_iscsi_set_options", 00:17:48.026 "params": { 00:17:48.026 "timeout_sec": 30 00:17:48.026 } 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "method": "bdev_nvme_set_options", 00:17:48.026 "params": { 00:17:48.026 "action_on_timeout": "none", 00:17:48.026 "allow_accel_sequence": false, 00:17:48.026 "arbitration_burst": 0, 00:17:48.026 "bdev_retry_count": 3, 00:17:48.026 "ctrlr_loss_timeout_sec": 0, 00:17:48.026 "delay_cmd_submit": true, 00:17:48.026 "fast_io_fail_timeout_sec": 0, 00:17:48.026 "generate_uuids": false, 00:17:48.026 "high_priority_weight": 0, 00:17:48.026 "io_path_stat": false, 00:17:48.026 "io_queue_requests": 512, 00:17:48.026 "keep_alive_timeout_ms": 10000, 00:17:48.026 "low_priority_weight": 0, 00:17:48.026 "medium_priority_weight": 0, 00:17:48.026 "nvme_adminq_poll_period_us": 10000, 00:17:48.026 "nvme_ioq_poll_period_us": 0, 00:17:48.026 "reconnect_delay_sec": 0, 00:17:48.026 "timeout_admin_us": 0, 00:17:48.026 "timeout_us": 0, 00:17:48.026 "transport_ack_timeout": 0, 00:17:48.026 "transport_retry_count": 4, 00:17:48.026 "transport_tos": 0 00:17:48.026 } 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "method": "bdev_nvme_attach_controller", 00:17:48.026 "params": { 00:17:48.026 "adrfam": "IPv4", 00:17:48.026 "ctrlr_loss_timeout_sec": 0, 00:17:48.026 "ddgst": false, 00:17:48.026 "fast_io_fail_timeout_sec": 0, 00:17:48.026 "hdgst": false, 00:17:48.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.026 "name": "TLSTEST", 00:17:48.026 "prchk_guard": false, 00:17:48.026 "prchk_reftag": false, 00:17:48.026 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:48.026 "reconnect_delay_sec": 0, 00:17:48.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.026 "traddr": "10.0.0.2", 00:17:48.026 "trsvcid": "4420", 00:17:48.026 "trtype": "TCP" 00:17:48.026 } 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "method": "bdev_nvme_set_hotplug", 00:17:48.026 "params": { 00:17:48.026 "enable": false, 00:17:48.026 "period_us": 100000 00:17:48.026 } 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "method": "bdev_wait_for_examine" 00:17:48.026 } 00:17:48.026 ] 00:17:48.026 }, 00:17:48.026 { 00:17:48.026 "subsystem": "nbd", 00:17:48.026 "config": [] 00:17:48.026 } 00:17:48.026 ] 00:17:48.026 }' 00:17:48.026 08:08:59 -- target/tls.sh@208 -- # killprocess 89609 00:17:48.026 08:08:59 -- 
common/autotest_common.sh@936 -- # '[' -z 89609 ']' 00:17:48.026 08:08:59 -- common/autotest_common.sh@940 -- # kill -0 89609 00:17:48.026 08:08:59 -- common/autotest_common.sh@941 -- # uname 00:17:48.026 08:08:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.026 08:08:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89609 00:17:48.026 08:08:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:48.026 killing process with pid 89609 00:17:48.026 08:08:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:48.026 08:08:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89609' 00:17:48.026 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.026 00:17:48.026 Latency(us) 00:17:48.026 [2024-12-07T08:08:59.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.026 [2024-12-07T08:08:59.302Z] =================================================================================================================== 00:17:48.026 [2024-12-07T08:08:59.302Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.026 08:08:59 -- common/autotest_common.sh@955 -- # kill 89609 00:17:48.026 08:08:59 -- common/autotest_common.sh@960 -- # wait 89609 00:17:48.285 08:08:59 -- target/tls.sh@209 -- # killprocess 89502 00:17:48.285 08:08:59 -- common/autotest_common.sh@936 -- # '[' -z 89502 ']' 00:17:48.285 08:08:59 -- common/autotest_common.sh@940 -- # kill -0 89502 00:17:48.285 08:08:59 -- common/autotest_common.sh@941 -- # uname 00:17:48.285 08:08:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:48.285 08:08:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89502 00:17:48.285 08:08:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:48.285 killing process with pid 89502 00:17:48.285 08:08:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:48.285 08:08:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89502' 00:17:48.285 08:08:59 -- common/autotest_common.sh@955 -- # kill 89502 00:17:48.285 08:08:59 -- common/autotest_common.sh@960 -- # wait 89502 00:17:48.544 08:08:59 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:48.544 08:08:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:48.544 08:08:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:48.544 08:08:59 -- target/tls.sh@212 -- # echo '{ 00:17:48.544 "subsystems": [ 00:17:48.544 { 00:17:48.544 "subsystem": "iobuf", 00:17:48.544 "config": [ 00:17:48.544 { 00:17:48.544 "method": "iobuf_set_options", 00:17:48.544 "params": { 00:17:48.544 "large_bufsize": 135168, 00:17:48.544 "large_pool_count": 1024, 00:17:48.544 "small_bufsize": 8192, 00:17:48.544 "small_pool_count": 8192 00:17:48.544 } 00:17:48.544 } 00:17:48.544 ] 00:17:48.544 }, 00:17:48.544 { 00:17:48.544 "subsystem": "sock", 00:17:48.544 "config": [ 00:17:48.544 { 00:17:48.544 "method": "sock_impl_set_options", 00:17:48.544 "params": { 00:17:48.544 "enable_ktls": false, 00:17:48.544 "enable_placement_id": 0, 00:17:48.544 "enable_quickack": false, 00:17:48.544 "enable_recv_pipe": true, 00:17:48.544 "enable_zerocopy_send_client": false, 00:17:48.544 "enable_zerocopy_send_server": true, 00:17:48.544 "impl_name": "posix", 00:17:48.544 "recv_buf_size": 2097152, 00:17:48.544 "send_buf_size": 2097152, 00:17:48.544 "tls_version": 0, 00:17:48.544 "zerocopy_threshold": 0 00:17:48.544 } 00:17:48.544 }, 00:17:48.544 { 00:17:48.544 
"method": "sock_impl_set_options", 00:17:48.544 "params": { 00:17:48.544 "enable_ktls": false, 00:17:48.544 "enable_placement_id": 0, 00:17:48.544 "enable_quickack": false, 00:17:48.544 "enable_recv_pipe": true, 00:17:48.544 "enable_zerocopy_send_client": false, 00:17:48.544 "enable_zerocopy_send_server": true, 00:17:48.544 "impl_name": "ssl", 00:17:48.544 "recv_buf_size": 4096, 00:17:48.544 "send_buf_size": 4096, 00:17:48.544 "tls_version": 0, 00:17:48.544 "zerocopy_threshold": 0 00:17:48.544 } 00:17:48.544 } 00:17:48.544 ] 00:17:48.544 }, 00:17:48.544 { 00:17:48.544 "subsystem": "vmd", 00:17:48.544 "config": [] 00:17:48.544 }, 00:17:48.544 { 00:17:48.544 "subsystem": "accel", 00:17:48.544 "config": [ 00:17:48.544 { 00:17:48.544 "method": "accel_set_options", 00:17:48.544 "params": { 00:17:48.544 "buf_count": 2048, 00:17:48.544 "large_cache_size": 16, 00:17:48.544 "sequence_count": 2048, 00:17:48.544 "small_cache_size": 128, 00:17:48.544 "task_count": 2048 00:17:48.544 } 00:17:48.544 } 00:17:48.544 ] 00:17:48.544 }, 00:17:48.544 { 00:17:48.544 "subsystem": "bdev", 00:17:48.544 "config": [ 00:17:48.544 { 00:17:48.544 "method": "bdev_set_options", 00:17:48.544 "params": { 00:17:48.544 "bdev_auto_examine": true, 00:17:48.544 "bdev_io_cache_size": 256, 00:17:48.544 "bdev_io_pool_size": 65535, 00:17:48.544 "iobuf_large_cache_size": 16, 00:17:48.544 "iobuf_small_cache_size": 128 00:17:48.544 } 00:17:48.544 }, 00:17:48.544 { 00:17:48.545 "method": "bdev_raid_set_options", 00:17:48.545 "params": { 00:17:48.545 "process_window_size_kb": 1024 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "bdev_iscsi_set_options", 00:17:48.545 "params": { 00:17:48.545 "timeout_sec": 30 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "bdev_nvme_set_options", 00:17:48.545 "params": { 00:17:48.545 "action_on_timeout": "none", 00:17:48.545 "allow_accel_sequence": false, 00:17:48.545 "arbitration_burst": 0, 00:17:48.545 "bdev_retry_count": 3, 00:17:48.545 "ctrlr_loss_timeout_sec": 0, 00:17:48.545 "delay_cmd_submit": true, 00:17:48.545 "fast_io_fail_timeout_sec": 0, 00:17:48.545 "generate_uuids": false, 00:17:48.545 "high_priority_weight": 0, 00:17:48.545 "io_path_stat": false, 00:17:48.545 "io_queue_requests": 0, 00:17:48.545 "keep_alive_timeout_ms": 10000, 00:17:48.545 "low_priority_weight": 0, 00:17:48.545 "medium_priority_weight": 0, 00:17:48.545 "nvme_adminq_poll_period_us": 10000, 00:17:48.545 "nvme_ioq_poll_period_us": 0, 00:17:48.545 "reconnect_delay_sec": 0, 00:17:48.545 "timeout_admin_us": 0, 00:17:48.545 "timeout_us": 0, 00:17:48.545 "transport_ack_timeout": 0, 00:17:48.545 "transport_retry_count": 4, 00:17:48.545 "transport_tos": 0 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "bdev_nvme_set_hotplug", 00:17:48.545 "params": { 00:17:48.545 "enable": false, 00:17:48.545 "period_us": 100000 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "bdev_malloc_create", 00:17:48.545 "params": { 00:17:48.545 "block_size": 4096, 00:17:48.545 "name": "malloc0", 00:17:48.545 "num_blocks": 8192, 00:17:48.545 "optimal_io_boundary": 0, 00:17:48.545 "physical_block_size": 4096, 00:17:48.545 "uuid": "dee6a9ed-5a1d-4046-81b2-fad7d102c701" 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "bdev_wait_for_examine" 00:17:48.545 } 00:17:48.545 ] 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "subsystem": "nbd", 00:17:48.545 "config": [] 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "subsystem": "scheduler", 00:17:48.545 "config": [ 
00:17:48.545 { 00:17:48.545 "method": "framework_set_scheduler", 00:17:48.545 "params": { 00:17:48.545 "name": "static" 00:17:48.545 } 00:17:48.545 } 00:17:48.545 ] 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "subsystem": "nvmf", 00:17:48.545 "config": [ 00:17:48.545 { 00:17:48.545 "method": "nvmf_set_config", 00:17:48.545 "params": { 00:17:48.545 "admin_cmd_passthru": { 00:17:48.545 "identify_ctrlr": false 00:17:48.545 }, 00:17:48.545 "discovery_filter": "match_any" 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "nvmf_set_max_subsystems", 00:17:48.545 "params": { 00:17:48.545 "max_subsystems": 1024 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "nvmf_set_crdt", 00:17:48.545 "params": { 00:17:48.545 "crdt1": 0, 00:17:48.545 "crdt2": 0, 00:17:48.545 "crdt3": 0 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "nvmf_create_transport", 00:17:48.545 "params": { 00:17:48.545 "abort_timeout_sec": 1, 00:17:48.545 "buf_cache_size": 4294967295, 00:17:48.545 "c2h_success": false, 00:17:48.545 "dif_insert_or_strip": false, 00:17:48.545 "in_capsule_data_size": 4096, 00:17:48.545 "io_unit_size": 131072, 00:17:48.545 "max_aq_depth": 128, 00:17:48.545 "max_io_qpairs_per_ctrlr": 127, 00:17:48.545 "max_io_size": 131072, 00:17:48.545 "max_queue_depth": 128, 00:17:48.545 "num_shared_buffers": 511, 00:17:48.545 "sock_priority": 0, 00:17:48.545 "trtype": "TCP", 00:17:48.545 "zcopy": false 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "nvmf_create_subsystem", 00:17:48.545 "params": { 00:17:48.545 "allow_any_host": false, 00:17:48.545 "ana_reporting": false, 00:17:48.545 "max_cntlid": 65519, 00:17:48.545 "max_namespaces": 10, 00:17:48.545 "min_cntlid": 1, 00:17:48.545 "model_number": "SPDK bdev Controller", 00:17:48.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.545 "serial_number": "SPDK00000000000001" 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "nvmf_subsystem_add_host", 00:17:48.545 "params": { 00:17:48.545 "host": "nqn.2016-06.io.spdk:host1", 00:17:48.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.545 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "nvmf_subsystem_add_ns", 00:17:48.545 "params": { 00:17:48.545 "namespace": { 00:17:48.545 "bdev_name": "malloc0", 00:17:48.545 "nguid": "DEE6A9ED5A1D404681B2FAD7D102C701", 00:17:48.545 "nsid": 1, 00:17:48.545 "uuid": "dee6a9ed-5a1d-4046-81b2-fad7d102c701" 00:17:48.545 }, 00:17:48.545 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:48.545 } 00:17:48.545 }, 00:17:48.545 { 00:17:48.545 "method": "nvmf_subsystem_add_listener", 00:17:48.545 "params": { 00:17:48.545 "listen_address": { 00:17:48.545 "adrfam": "IPv4", 00:17:48.545 "traddr": "10.0.0.2", 00:17:48.545 "trsvcid": "4420", 00:17:48.545 "trtype": "TCP" 00:17:48.545 }, 00:17:48.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.545 "secure_channel": true 00:17:48.545 } 00:17:48.545 } 00:17:48.545 ] 00:17:48.545 } 00:17:48.545 ] 00:17:48.545 }' 00:17:48.545 08:08:59 -- common/autotest_common.sh@10 -- # set +x 00:17:48.545 08:08:59 -- nvmf/common.sh@469 -- # nvmfpid=89682 00:17:48.545 08:08:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:48.545 08:08:59 -- nvmf/common.sh@470 -- # waitforlisten 89682 00:17:48.545 08:08:59 -- common/autotest_common.sh@829 -- # '[' -z 89682 ']' 00:17:48.545 08:08:59 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.545 08:08:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.545 08:08:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.545 08:08:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.545 08:08:59 -- common/autotest_common.sh@10 -- # set +x 00:17:48.545 [2024-12-07 08:08:59.680844] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:48.545 [2024-12-07 08:08:59.680947] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.804 [2024-12-07 08:08:59.821502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.804 [2024-12-07 08:08:59.888694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:48.804 [2024-12-07 08:08:59.888821] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.804 [2024-12-07 08:08:59.888834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.804 [2024-12-07 08:08:59.888841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.804 [2024-12-07 08:08:59.888872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.062 [2024-12-07 08:09:00.102826] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.062 [2024-12-07 08:09:00.134782] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:49.062 [2024-12-07 08:09:00.134992] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.630 08:09:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.630 08:09:00 -- common/autotest_common.sh@862 -- # return 0 00:17:49.630 08:09:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:49.630 08:09:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:49.630 08:09:00 -- common/autotest_common.sh@10 -- # set +x 00:17:49.630 08:09:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.630 08:09:00 -- target/tls.sh@216 -- # bdevperf_pid=89726 00:17:49.630 08:09:00 -- target/tls.sh@217 -- # waitforlisten 89726 /var/tmp/bdevperf.sock 00:17:49.630 08:09:00 -- common/autotest_common.sh@829 -- # '[' -z 89726 ']' 00:17:49.631 08:09:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.631 08:09:00 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:49.631 08:09:00 -- target/tls.sh@213 -- # echo '{ 00:17:49.631 "subsystems": [ 00:17:49.631 { 00:17:49.631 "subsystem": "iobuf", 00:17:49.631 "config": [ 00:17:49.631 { 00:17:49.631 "method": "iobuf_set_options", 00:17:49.631 "params": { 00:17:49.631 "large_bufsize": 135168, 00:17:49.631 "large_pool_count": 1024, 00:17:49.631 "small_bufsize": 8192, 00:17:49.631 "small_pool_count": 8192 00:17:49.631 } 00:17:49.631 } 00:17:49.631 ] 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 
"subsystem": "sock", 00:17:49.631 "config": [ 00:17:49.631 { 00:17:49.631 "method": "sock_impl_set_options", 00:17:49.631 "params": { 00:17:49.631 "enable_ktls": false, 00:17:49.631 "enable_placement_id": 0, 00:17:49.631 "enable_quickack": false, 00:17:49.631 "enable_recv_pipe": true, 00:17:49.631 "enable_zerocopy_send_client": false, 00:17:49.631 "enable_zerocopy_send_server": true, 00:17:49.631 "impl_name": "posix", 00:17:49.631 "recv_buf_size": 2097152, 00:17:49.631 "send_buf_size": 2097152, 00:17:49.631 "tls_version": 0, 00:17:49.631 "zerocopy_threshold": 0 00:17:49.631 } 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "method": "sock_impl_set_options", 00:17:49.631 "params": { 00:17:49.631 "enable_ktls": false, 00:17:49.631 "enable_placement_id": 0, 00:17:49.631 "enable_quickack": false, 00:17:49.631 "enable_recv_pipe": true, 00:17:49.631 "enable_zerocopy_send_client": false, 00:17:49.631 "enable_zerocopy_send_server": true, 00:17:49.631 "impl_name": "ssl", 00:17:49.631 "recv_buf_size": 4096, 00:17:49.631 "send_buf_size": 4096, 00:17:49.631 "tls_version": 0, 00:17:49.631 "zerocopy_threshold": 0 00:17:49.631 } 00:17:49.631 } 00:17:49.631 ] 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "subsystem": "vmd", 00:17:49.631 "config": [] 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "subsystem": "accel", 00:17:49.631 "config": [ 00:17:49.631 { 00:17:49.631 "method": "accel_set_options", 00:17:49.631 "params": { 00:17:49.631 "buf_count": 2048, 00:17:49.631 "large_cache_size": 16, 00:17:49.631 "sequence_count": 2048, 00:17:49.631 "small_cache_size": 128, 00:17:49.631 "task_count": 2048 00:17:49.631 } 00:17:49.631 } 00:17:49.631 ] 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "subsystem": "bdev", 00:17:49.631 "config": [ 00:17:49.631 { 00:17:49.631 "method": "bdev_set_options", 00:17:49.631 "params": { 00:17:49.631 "bdev_auto_examine": true, 00:17:49.631 "bdev_io_cache_size": 256, 00:17:49.631 "bdev_io_pool_size": 65535, 00:17:49.631 "iobuf_large_cache_size": 16, 00:17:49.631 "iobuf_small_cache_size": 128 00:17:49.631 } 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "method": "bdev_raid_set_options", 00:17:49.631 "params": { 00:17:49.631 "process_window_size_kb": 1024 00:17:49.631 } 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "method": "bdev_iscsi_set_options", 00:17:49.631 "params": { 00:17:49.631 "timeout_sec": 30 00:17:49.631 } 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "method": "bdev_nvme_set_options", 00:17:49.631 "params": { 00:17:49.631 "action_on_timeout": "none", 00:17:49.631 "allow_accel_sequence": false, 00:17:49.631 "arbitration_burst": 0, 00:17:49.631 "bdev_retry_count": 3, 00:17:49.631 "ctrlr_loss_timeout_sec": 0, 00:17:49.631 "delay_cmd_submit": true, 00:17:49.631 "fast_io_fail_timeout_sec": 0, 00:17:49.631 "generate_uuids": false, 00:17:49.631 "high_priority_weight": 0, 00:17:49.631 "io_path_stat": false, 00:17:49.631 "io_queue_requests": 512, 00:17:49.631 "keep_alive_timeout_ms": 10000, 00:17:49.631 "low_priority_weight": 0, 00:17:49.631 "medium_priority_weight": 0, 00:17:49.631 "nvme_adminq_poll_period_us": 10000, 00:17:49.631 "nvme_ioq_poll_period_us": 0, 00:17:49.631 "reconnect_delay_sec": 0, 00:17:49.631 "timeout_admin_us": 0, 00:17:49.631 "timeout_us": 0, 00:17:49.631 "transport_ack_timeout": 0, 00:17:49.631 "transport_retry_count": 4, 00:17:49.631 "transport_tos": 0 00:17:49.631 } 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "method": "bdev_nvme_attach_controller", 00:17:49.631 "params": { 00:17:49.631 "adrfam": "IPv4", 00:17:49.631 "ctrlr_loss_timeout_sec": 0, 00:17:49.631 "ddgst": 
false, 00:17:49.631 "fast_io_fail_timeout_sec": 0, 00:17:49.631 "hdgst": false, 00:17:49.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.631 "name": "TLSTEST", 00:17:49.631 "prchk_guard": false, 00:17:49.631 "prchk_reftag": false, 00:17:49.631 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:49.631 "reconnect_delay_sec": 0, 00:17:49.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.631 "traddr": "10.0.0.2", 00:17:49.631 "trsvcid": "4420", 00:17:49.631 "trtype": "TCP" 00:17:49.631 } 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "method": "bdev_nvme_set_hotplug", 00:17:49.631 "params": { 00:17:49.631 "enable": false, 00:17:49.631 "period_us": 100000 00:17:49.631 } 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "method": "bdev_wait_for_examine" 00:17:49.631 } 00:17:49.631 ] 00:17:49.631 }, 00:17:49.631 { 00:17:49.631 "subsystem": "nbd", 00:17:49.631 "config": [] 00:17:49.631 } 00:17:49.631 ] 00:17:49.631 }' 00:17:49.631 08:09:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.631 08:09:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.631 08:09:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.631 08:09:00 -- common/autotest_common.sh@10 -- # set +x 00:17:49.631 [2024-12-07 08:09:00.774655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:49.631 [2024-12-07 08:09:00.774759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89726 ] 00:17:49.890 [2024-12-07 08:09:00.916664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.890 [2024-12-07 08:09:00.997056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.890 [2024-12-07 08:09:01.148727] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.827 08:09:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.827 08:09:01 -- common/autotest_common.sh@862 -- # return 0 00:17:50.827 08:09:01 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:50.827 Running I/O for 10 seconds... 
00:18:00.817 00:18:00.817 Latency(us) 00:18:00.817 [2024-12-07T08:09:12.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.817 [2024-12-07T08:09:12.093Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:00.817 Verification LBA range: start 0x0 length 0x2000 00:18:00.817 TLSTESTn1 : 10.01 6103.89 23.84 0.00 0.00 20936.93 4527.94 25737.77 00:18:00.817 [2024-12-07T08:09:12.093Z] =================================================================================================================== 00:18:00.817 [2024-12-07T08:09:12.093Z] Total : 6103.89 23.84 0.00 0.00 20936.93 4527.94 25737.77 00:18:00.817 0 00:18:00.817 08:09:11 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.817 08:09:11 -- target/tls.sh@223 -- # killprocess 89726 00:18:00.817 08:09:11 -- common/autotest_common.sh@936 -- # '[' -z 89726 ']' 00:18:00.817 08:09:11 -- common/autotest_common.sh@940 -- # kill -0 89726 00:18:00.817 08:09:11 -- common/autotest_common.sh@941 -- # uname 00:18:00.817 08:09:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.817 08:09:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89726 00:18:00.817 08:09:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:00.817 08:09:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:00.817 killing process with pid 89726 00:18:00.817 08:09:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89726' 00:18:00.817 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.817 00:18:00.817 Latency(us) 00:18:00.817 [2024-12-07T08:09:12.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.817 [2024-12-07T08:09:12.093Z] =================================================================================================================== 00:18:00.817 [2024-12-07T08:09:12.093Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.817 08:09:11 -- common/autotest_common.sh@955 -- # kill 89726 00:18:00.817 08:09:11 -- common/autotest_common.sh@960 -- # wait 89726 00:18:01.076 08:09:12 -- target/tls.sh@224 -- # killprocess 89682 00:18:01.076 08:09:12 -- common/autotest_common.sh@936 -- # '[' -z 89682 ']' 00:18:01.076 08:09:12 -- common/autotest_common.sh@940 -- # kill -0 89682 00:18:01.076 08:09:12 -- common/autotest_common.sh@941 -- # uname 00:18:01.076 08:09:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:01.076 08:09:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89682 00:18:01.076 08:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:01.076 killing process with pid 89682 00:18:01.076 08:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:01.076 08:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89682' 00:18:01.076 08:09:12 -- common/autotest_common.sh@955 -- # kill 89682 00:18:01.076 08:09:12 -- common/autotest_common.sh@960 -- # wait 89682 00:18:01.335 08:09:12 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:01.335 08:09:12 -- target/tls.sh@227 -- # cleanup 00:18:01.335 08:09:12 -- target/tls.sh@15 -- # process_shm --id 0 00:18:01.335 08:09:12 -- common/autotest_common.sh@806 -- # type=--id 00:18:01.335 08:09:12 -- common/autotest_common.sh@807 -- # id=0 00:18:01.335 08:09:12 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:01.335 08:09:12 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:18:01.335 08:09:12 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:01.335 08:09:12 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:01.335 08:09:12 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:01.335 08:09:12 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:01.335 nvmf_trace.0 00:18:01.335 08:09:12 -- common/autotest_common.sh@821 -- # return 0 00:18:01.335 08:09:12 -- target/tls.sh@16 -- # killprocess 89726 00:18:01.335 08:09:12 -- common/autotest_common.sh@936 -- # '[' -z 89726 ']' 00:18:01.335 08:09:12 -- common/autotest_common.sh@940 -- # kill -0 89726 00:18:01.335 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89726) - No such process 00:18:01.335 Process with pid 89726 is not found 00:18:01.335 08:09:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89726 is not found' 00:18:01.335 08:09:12 -- target/tls.sh@17 -- # nvmftestfini 00:18:01.335 08:09:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:01.335 08:09:12 -- nvmf/common.sh@116 -- # sync 00:18:01.335 08:09:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:01.335 08:09:12 -- nvmf/common.sh@119 -- # set +e 00:18:01.335 08:09:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:01.335 08:09:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:01.335 rmmod nvme_tcp 00:18:01.335 rmmod nvme_fabrics 00:18:01.335 rmmod nvme_keyring 00:18:01.335 08:09:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:01.335 08:09:12 -- nvmf/common.sh@123 -- # set -e 00:18:01.335 08:09:12 -- nvmf/common.sh@124 -- # return 0 00:18:01.335 08:09:12 -- nvmf/common.sh@477 -- # '[' -n 89682 ']' 00:18:01.335 08:09:12 -- nvmf/common.sh@478 -- # killprocess 89682 00:18:01.335 08:09:12 -- common/autotest_common.sh@936 -- # '[' -z 89682 ']' 00:18:01.335 08:09:12 -- common/autotest_common.sh@940 -- # kill -0 89682 00:18:01.335 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89682) - No such process 00:18:01.335 Process with pid 89682 is not found 00:18:01.335 08:09:12 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89682 is not found' 00:18:01.335 08:09:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:01.335 08:09:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:01.335 08:09:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:01.335 08:09:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.335 08:09:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:01.335 08:09:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.335 08:09:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.335 08:09:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.335 08:09:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:01.594 08:09:12 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:01.594 ************************************ 00:18:01.594 END TEST nvmf_tls 00:18:01.594 ************************************ 00:18:01.594 00:18:01.594 real 1m11.579s 00:18:01.594 user 1m50.391s 00:18:01.594 sys 0m24.705s 00:18:01.594 08:09:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:01.594 08:09:12 -- common/autotest_common.sh@10 -- # 
set +x 00:18:01.594 08:09:12 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.594 08:09:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:01.594 08:09:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:01.594 08:09:12 -- common/autotest_common.sh@10 -- # set +x 00:18:01.594 ************************************ 00:18:01.594 START TEST nvmf_fips 00:18:01.594 ************************************ 00:18:01.594 08:09:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:01.594 * Looking for test storage... 00:18:01.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:01.594 08:09:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:01.594 08:09:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:01.594 08:09:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:01.594 08:09:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:01.594 08:09:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:01.594 08:09:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:01.594 08:09:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:01.594 08:09:12 -- scripts/common.sh@335 -- # IFS=.-: 00:18:01.594 08:09:12 -- scripts/common.sh@335 -- # read -ra ver1 00:18:01.594 08:09:12 -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.594 08:09:12 -- scripts/common.sh@336 -- # read -ra ver2 00:18:01.594 08:09:12 -- scripts/common.sh@337 -- # local 'op=<' 00:18:01.594 08:09:12 -- scripts/common.sh@339 -- # ver1_l=2 00:18:01.594 08:09:12 -- scripts/common.sh@340 -- # ver2_l=1 00:18:01.594 08:09:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:01.594 08:09:12 -- scripts/common.sh@343 -- # case "$op" in 00:18:01.594 08:09:12 -- scripts/common.sh@344 -- # : 1 00:18:01.594 08:09:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:01.594 08:09:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:01.594 08:09:12 -- scripts/common.sh@364 -- # decimal 1 00:18:01.594 08:09:12 -- scripts/common.sh@352 -- # local d=1 00:18:01.594 08:09:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.594 08:09:12 -- scripts/common.sh@354 -- # echo 1 00:18:01.594 08:09:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:01.594 08:09:12 -- scripts/common.sh@365 -- # decimal 2 00:18:01.594 08:09:12 -- scripts/common.sh@352 -- # local d=2 00:18:01.594 08:09:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.594 08:09:12 -- scripts/common.sh@354 -- # echo 2 00:18:01.594 08:09:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:01.594 08:09:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:01.594 08:09:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:01.594 08:09:12 -- scripts/common.sh@367 -- # return 0 00:18:01.594 08:09:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.594 08:09:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.594 --rc genhtml_branch_coverage=1 00:18:01.594 --rc genhtml_function_coverage=1 00:18:01.594 --rc genhtml_legend=1 00:18:01.594 --rc geninfo_all_blocks=1 00:18:01.594 --rc geninfo_unexecuted_blocks=1 00:18:01.594 00:18:01.594 ' 00:18:01.594 08:09:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.594 --rc genhtml_branch_coverage=1 00:18:01.594 --rc genhtml_function_coverage=1 00:18:01.594 --rc genhtml_legend=1 00:18:01.594 --rc geninfo_all_blocks=1 00:18:01.594 --rc geninfo_unexecuted_blocks=1 00:18:01.594 00:18:01.594 ' 00:18:01.594 08:09:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.594 --rc genhtml_branch_coverage=1 00:18:01.594 --rc genhtml_function_coverage=1 00:18:01.594 --rc genhtml_legend=1 00:18:01.594 --rc geninfo_all_blocks=1 00:18:01.594 --rc geninfo_unexecuted_blocks=1 00:18:01.594 00:18:01.594 ' 00:18:01.594 08:09:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.594 --rc genhtml_branch_coverage=1 00:18:01.594 --rc genhtml_function_coverage=1 00:18:01.594 --rc genhtml_legend=1 00:18:01.594 --rc geninfo_all_blocks=1 00:18:01.594 --rc geninfo_unexecuted_blocks=1 00:18:01.594 00:18:01.594 ' 00:18:01.594 08:09:12 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:01.594 08:09:12 -- nvmf/common.sh@7 -- # uname -s 00:18:01.594 08:09:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.594 08:09:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.594 08:09:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.594 08:09:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.594 08:09:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.594 08:09:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.594 08:09:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.594 08:09:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.594 08:09:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.594 08:09:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.594 08:09:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:18:01.594 
08:09:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:18:01.594 08:09:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.594 08:09:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.594 08:09:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:01.594 08:09:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:01.594 08:09:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.594 08:09:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.594 08:09:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.595 08:09:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.595 08:09:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.595 08:09:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.595 08:09:12 -- paths/export.sh@5 -- # export PATH 00:18:01.595 08:09:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.595 08:09:12 -- nvmf/common.sh@46 -- # : 0 00:18:01.595 08:09:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.595 08:09:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.595 08:09:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.595 08:09:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.595 08:09:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.595 08:09:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:01.595 08:09:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.595 08:09:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.854 08:09:12 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:01.854 08:09:12 -- fips/fips.sh@89 -- # check_openssl_version 00:18:01.854 08:09:12 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:01.854 08:09:12 -- fips/fips.sh@85 -- # openssl version 00:18:01.854 08:09:12 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:01.854 08:09:12 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:01.854 08:09:12 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:01.854 08:09:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:01.854 08:09:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:01.854 08:09:12 -- scripts/common.sh@335 -- # IFS=.-: 00:18:01.854 08:09:12 -- scripts/common.sh@335 -- # read -ra ver1 00:18:01.854 08:09:12 -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.854 08:09:12 -- scripts/common.sh@336 -- # read -ra ver2 00:18:01.854 08:09:12 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:01.854 08:09:12 -- scripts/common.sh@339 -- # ver1_l=3 00:18:01.854 08:09:12 -- scripts/common.sh@340 -- # ver2_l=3 00:18:01.854 08:09:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:01.854 08:09:12 -- scripts/common.sh@343 -- # case "$op" in 00:18:01.854 08:09:12 -- scripts/common.sh@347 -- # : 1 00:18:01.854 08:09:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:01.854 08:09:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.854 08:09:12 -- scripts/common.sh@364 -- # decimal 3 00:18:01.854 08:09:12 -- scripts/common.sh@352 -- # local d=3 00:18:01.854 08:09:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:01.854 08:09:12 -- scripts/common.sh@354 -- # echo 3 00:18:01.854 08:09:12 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:01.854 08:09:12 -- scripts/common.sh@365 -- # decimal 3 00:18:01.854 08:09:12 -- scripts/common.sh@352 -- # local d=3 00:18:01.854 08:09:12 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:01.854 08:09:12 -- scripts/common.sh@354 -- # echo 3 00:18:01.854 08:09:12 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:01.854 08:09:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:01.854 08:09:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:01.854 08:09:12 -- scripts/common.sh@363 -- # (( v++ )) 00:18:01.854 08:09:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.854 08:09:12 -- scripts/common.sh@364 -- # decimal 1 00:18:01.854 08:09:12 -- scripts/common.sh@352 -- # local d=1 00:18:01.854 08:09:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.854 08:09:12 -- scripts/common.sh@354 -- # echo 1 00:18:01.854 08:09:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:01.854 08:09:12 -- scripts/common.sh@365 -- # decimal 0 00:18:01.854 08:09:12 -- scripts/common.sh@352 -- # local d=0 00:18:01.854 08:09:12 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:01.854 08:09:12 -- scripts/common.sh@354 -- # echo 0 00:18:01.854 08:09:12 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:01.854 08:09:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:01.854 08:09:12 -- scripts/common.sh@366 -- # return 0 00:18:01.854 08:09:12 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:01.854 08:09:12 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:01.854 08:09:12 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:01.854 08:09:12 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:01.854 08:09:12 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:01.854 08:09:12 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:01.854 08:09:12 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:01.854 08:09:12 -- fips/fips.sh@113 -- # build_openssl_config 00:18:01.854 08:09:12 -- fips/fips.sh@37 -- # cat 00:18:01.854 08:09:12 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:01.854 08:09:12 -- fips/fips.sh@58 -- # cat - 00:18:01.854 08:09:12 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:01.854 08:09:12 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:01.854 08:09:12 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:01.854 08:09:12 -- fips/fips.sh@116 -- # openssl list -providers 00:18:01.854 08:09:12 -- fips/fips.sh@116 -- # grep name 00:18:01.854 08:09:12 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:01.854 08:09:12 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:01.854 08:09:12 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:01.854 08:09:12 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:01.854 08:09:12 -- common/autotest_common.sh@650 -- # local es=0 00:18:01.854 08:09:12 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:01.854 08:09:12 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:01.854 08:09:12 -- fips/fips.sh@127 -- # : 00:18:01.854 08:09:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.854 08:09:12 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:01.854 08:09:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.854 08:09:12 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:01.854 08:09:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:01.854 08:09:12 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:01.854 08:09:12 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:01.854 08:09:12 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:01.854 Error setting digest 00:18:01.854 4042C5F7CE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:01.854 4042C5F7CE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:01.854 08:09:13 -- common/autotest_common.sh@653 -- # es=1 00:18:01.854 08:09:13 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:01.854 08:09:13 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:01.854 08:09:13 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:01.854 08:09:13 -- fips/fips.sh@130 -- # nvmftestinit 00:18:01.854 08:09:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:01.854 08:09:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.854 08:09:13 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:18:01.854 08:09:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:01.854 08:09:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.854 08:09:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.854 08:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.854 08:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.854 08:09:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:01.854 08:09:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:01.854 08:09:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:01.854 08:09:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:01.854 08:09:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:01.854 08:09:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:01.854 08:09:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.854 08:09:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.854 08:09:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:01.854 08:09:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:01.854 08:09:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:01.854 08:09:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:01.854 08:09:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:01.854 08:09:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.854 08:09:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:01.854 08:09:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:01.854 08:09:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:01.854 08:09:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:01.854 08:09:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:01.854 08:09:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:01.854 Cannot find device "nvmf_tgt_br" 00:18:01.854 08:09:13 -- nvmf/common.sh@154 -- # true 00:18:01.854 08:09:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:01.854 Cannot find device "nvmf_tgt_br2" 00:18:01.854 08:09:13 -- nvmf/common.sh@155 -- # true 00:18:01.854 08:09:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:01.854 08:09:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:01.854 Cannot find device "nvmf_tgt_br" 00:18:01.854 08:09:13 -- nvmf/common.sh@157 -- # true 00:18:01.854 08:09:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:01.854 Cannot find device "nvmf_tgt_br2" 00:18:01.854 08:09:13 -- nvmf/common.sh@158 -- # true 00:18:01.854 08:09:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:02.113 08:09:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:02.113 08:09:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.113 08:09:13 -- nvmf/common.sh@161 -- # true 00:18:02.113 08:09:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:02.113 08:09:13 -- nvmf/common.sh@162 -- # true 00:18:02.113 08:09:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:02.113 08:09:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:02.113 08:09:13 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:02.113 08:09:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:02.113 08:09:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:02.113 08:09:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:02.113 08:09:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:02.113 08:09:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:02.113 08:09:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:02.113 08:09:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:02.113 08:09:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:02.113 08:09:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:02.113 08:09:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:02.113 08:09:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:02.113 08:09:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:02.113 08:09:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:02.113 08:09:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:02.113 08:09:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:02.113 08:09:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:02.113 08:09:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:02.113 08:09:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:02.113 08:09:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:02.113 08:09:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:02.113 08:09:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:02.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:02.113 00:18:02.113 --- 10.0.0.2 ping statistics --- 00:18:02.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.113 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:02.113 08:09:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:02.113 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:02.113 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:02.113 00:18:02.113 --- 10.0.0.3 ping statistics --- 00:18:02.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.113 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:02.113 08:09:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:02.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:02.113 00:18:02.113 --- 10.0.0.1 ping statistics --- 00:18:02.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.113 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:02.113 08:09:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.113 08:09:13 -- nvmf/common.sh@421 -- # return 0 00:18:02.113 08:09:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:02.113 08:09:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.113 08:09:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:02.113 08:09:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:02.113 08:09:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.113 08:09:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:02.113 08:09:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:02.370 08:09:13 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:02.370 08:09:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:02.370 08:09:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:02.370 08:09:13 -- common/autotest_common.sh@10 -- # set +x 00:18:02.370 08:09:13 -- nvmf/common.sh@469 -- # nvmfpid=90095 00:18:02.370 08:09:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.370 08:09:13 -- nvmf/common.sh@470 -- # waitforlisten 90095 00:18:02.370 08:09:13 -- common/autotest_common.sh@829 -- # '[' -z 90095 ']' 00:18:02.370 08:09:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.370 08:09:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.370 08:09:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.370 08:09:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.370 08:09:13 -- common/autotest_common.sh@10 -- # set +x 00:18:02.370 [2024-12-07 08:09:13.482426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:02.370 [2024-12-07 08:09:13.482515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.370 [2024-12-07 08:09:13.625675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.627 [2024-12-07 08:09:13.714348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:02.627 [2024-12-07 08:09:13.714542] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.627 [2024-12-07 08:09:13.714558] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.627 [2024-12-07 08:09:13.714569] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:02.627 [2024-12-07 08:09:13.714617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.191 08:09:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.191 08:09:14 -- common/autotest_common.sh@862 -- # return 0 00:18:03.191 08:09:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:03.191 08:09:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:03.191 08:09:14 -- common/autotest_common.sh@10 -- # set +x 00:18:03.449 08:09:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.449 08:09:14 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:03.449 08:09:14 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:03.449 08:09:14 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.449 08:09:14 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:03.449 08:09:14 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.449 08:09:14 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.449 08:09:14 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:03.449 08:09:14 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.449 [2024-12-07 08:09:14.694276] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.449 [2024-12-07 08:09:14.710193] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.449 [2024-12-07 08:09:14.710447] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.706 malloc0 00:18:03.706 08:09:14 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.706 08:09:14 -- fips/fips.sh@147 -- # bdevperf_pid=90148 00:18:03.706 08:09:14 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.706 08:09:14 -- fips/fips.sh@148 -- # waitforlisten 90148 /var/tmp/bdevperf.sock 00:18:03.706 08:09:14 -- common/autotest_common.sh@829 -- # '[' -z 90148 ']' 00:18:03.706 08:09:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.706 08:09:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.706 08:09:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.706 08:09:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.706 08:09:14 -- common/autotest_common.sh@10 -- # set +x 00:18:03.706 [2024-12-07 08:09:14.849832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:03.706 [2024-12-07 08:09:14.849975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90148 ] 00:18:03.963 [2024-12-07 08:09:14.993654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.963 [2024-12-07 08:09:15.077483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.528 08:09:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.528 08:09:15 -- common/autotest_common.sh@862 -- # return 0 00:18:04.529 08:09:15 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:04.787 [2024-12-07 08:09:15.937406] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.787 TLSTESTn1 00:18:04.787 08:09:16 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.044 Running I/O for 10 seconds... 00:18:15.025 00:18:15.025 Latency(us) 00:18:15.025 [2024-12-07T08:09:26.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.025 [2024-12-07T08:09:26.301Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:15.025 Verification LBA range: start 0x0 length 0x2000 00:18:15.025 TLSTESTn1 : 10.01 6142.57 23.99 0.00 0.00 20811.65 1921.40 263097.25 00:18:15.025 [2024-12-07T08:09:26.301Z] =================================================================================================================== 00:18:15.025 [2024-12-07T08:09:26.301Z] Total : 6142.57 23.99 0.00 0.00 20811.65 1921.40 263097.25 00:18:15.025 0 00:18:15.025 08:09:26 -- fips/fips.sh@1 -- # cleanup 00:18:15.025 08:09:26 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:15.025 08:09:26 -- common/autotest_common.sh@806 -- # type=--id 00:18:15.025 08:09:26 -- common/autotest_common.sh@807 -- # id=0 00:18:15.025 08:09:26 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:15.025 08:09:26 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:15.025 08:09:26 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:15.025 08:09:26 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:15.025 08:09:26 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:15.025 08:09:26 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:15.025 nvmf_trace.0 00:18:15.025 08:09:26 -- common/autotest_common.sh@821 -- # return 0 00:18:15.025 08:09:26 -- fips/fips.sh@16 -- # killprocess 90148 00:18:15.025 08:09:26 -- common/autotest_common.sh@936 -- # '[' -z 90148 ']' 00:18:15.025 08:09:26 -- common/autotest_common.sh@940 -- # kill -0 90148 00:18:15.025 08:09:26 -- common/autotest_common.sh@941 -- # uname 00:18:15.025 08:09:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.025 08:09:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90148 00:18:15.025 killing process with pid 90148 00:18:15.025 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.025 00:18:15.025 Latency(us) 00:18:15.025 
[2024-12-07T08:09:26.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.025 [2024-12-07T08:09:26.301Z] =================================================================================================================== 00:18:15.025 [2024-12-07T08:09:26.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:15.025 08:09:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:15.025 08:09:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:15.025 08:09:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90148' 00:18:15.025 08:09:26 -- common/autotest_common.sh@955 -- # kill 90148 00:18:15.025 08:09:26 -- common/autotest_common.sh@960 -- # wait 90148 00:18:15.284 08:09:26 -- fips/fips.sh@17 -- # nvmftestfini 00:18:15.284 08:09:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.284 08:09:26 -- nvmf/common.sh@116 -- # sync 00:18:15.284 08:09:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.284 08:09:26 -- nvmf/common.sh@119 -- # set +e 00:18:15.284 08:09:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.284 08:09:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.284 rmmod nvme_tcp 00:18:15.284 rmmod nvme_fabrics 00:18:15.284 rmmod nvme_keyring 00:18:15.543 08:09:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.543 08:09:26 -- nvmf/common.sh@123 -- # set -e 00:18:15.543 08:09:26 -- nvmf/common.sh@124 -- # return 0 00:18:15.543 08:09:26 -- nvmf/common.sh@477 -- # '[' -n 90095 ']' 00:18:15.543 08:09:26 -- nvmf/common.sh@478 -- # killprocess 90095 00:18:15.543 08:09:26 -- common/autotest_common.sh@936 -- # '[' -z 90095 ']' 00:18:15.543 08:09:26 -- common/autotest_common.sh@940 -- # kill -0 90095 00:18:15.544 08:09:26 -- common/autotest_common.sh@941 -- # uname 00:18:15.544 08:09:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:15.544 08:09:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90095 00:18:15.544 killing process with pid 90095 00:18:15.544 08:09:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:15.544 08:09:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:15.544 08:09:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90095' 00:18:15.544 08:09:26 -- common/autotest_common.sh@955 -- # kill 90095 00:18:15.544 08:09:26 -- common/autotest_common.sh@960 -- # wait 90095 00:18:15.544 08:09:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.544 08:09:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:15.544 08:09:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:15.544 08:09:26 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.544 08:09:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:15.544 08:09:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.544 08:09:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.544 08:09:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.804 08:09:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:15.804 08:09:26 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:15.804 ************************************ 00:18:15.804 END TEST nvmf_fips 00:18:15.804 ************************************ 00:18:15.804 00:18:15.804 real 0m14.178s 00:18:15.804 user 0m19.038s 00:18:15.804 sys 0m5.761s 00:18:15.804 08:09:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 
00:18:15.804 08:09:26 -- common/autotest_common.sh@10 -- # set +x 00:18:15.804 08:09:26 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:15.804 08:09:26 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:15.804 08:09:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:15.804 08:09:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:15.804 08:09:26 -- common/autotest_common.sh@10 -- # set +x 00:18:15.804 ************************************ 00:18:15.804 START TEST nvmf_fuzz 00:18:15.804 ************************************ 00:18:15.804 08:09:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:15.804 * Looking for test storage... 00:18:15.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:15.804 08:09:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:15.804 08:09:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:15.804 08:09:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:15.804 08:09:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:15.804 08:09:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:15.804 08:09:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:15.804 08:09:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:15.804 08:09:27 -- scripts/common.sh@335 -- # IFS=.-: 00:18:15.804 08:09:27 -- scripts/common.sh@335 -- # read -ra ver1 00:18:15.804 08:09:27 -- scripts/common.sh@336 -- # IFS=.-: 00:18:15.804 08:09:27 -- scripts/common.sh@336 -- # read -ra ver2 00:18:15.804 08:09:27 -- scripts/common.sh@337 -- # local 'op=<' 00:18:15.804 08:09:27 -- scripts/common.sh@339 -- # ver1_l=2 00:18:15.804 08:09:27 -- scripts/common.sh@340 -- # ver2_l=1 00:18:15.804 08:09:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:15.804 08:09:27 -- scripts/common.sh@343 -- # case "$op" in 00:18:15.804 08:09:27 -- scripts/common.sh@344 -- # : 1 00:18:15.804 08:09:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:15.804 08:09:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:15.804 08:09:27 -- scripts/common.sh@364 -- # decimal 1 00:18:15.804 08:09:27 -- scripts/common.sh@352 -- # local d=1 00:18:15.804 08:09:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:15.804 08:09:27 -- scripts/common.sh@354 -- # echo 1 00:18:15.804 08:09:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:15.804 08:09:27 -- scripts/common.sh@365 -- # decimal 2 00:18:15.804 08:09:27 -- scripts/common.sh@352 -- # local d=2 00:18:15.804 08:09:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:15.804 08:09:27 -- scripts/common.sh@354 -- # echo 2 00:18:16.064 08:09:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:16.064 08:09:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:16.064 08:09:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:16.064 08:09:27 -- scripts/common.sh@367 -- # return 0 00:18:16.064 08:09:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.064 08:09:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:16.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.064 --rc genhtml_branch_coverage=1 00:18:16.064 --rc genhtml_function_coverage=1 00:18:16.064 --rc genhtml_legend=1 00:18:16.064 --rc geninfo_all_blocks=1 00:18:16.064 --rc geninfo_unexecuted_blocks=1 00:18:16.064 00:18:16.064 ' 00:18:16.064 08:09:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:16.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.064 --rc genhtml_branch_coverage=1 00:18:16.064 --rc genhtml_function_coverage=1 00:18:16.064 --rc genhtml_legend=1 00:18:16.064 --rc geninfo_all_blocks=1 00:18:16.064 --rc geninfo_unexecuted_blocks=1 00:18:16.064 00:18:16.064 ' 00:18:16.064 08:09:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:16.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.064 --rc genhtml_branch_coverage=1 00:18:16.064 --rc genhtml_function_coverage=1 00:18:16.064 --rc genhtml_legend=1 00:18:16.064 --rc geninfo_all_blocks=1 00:18:16.064 --rc geninfo_unexecuted_blocks=1 00:18:16.064 00:18:16.064 ' 00:18:16.064 08:09:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:16.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.065 --rc genhtml_branch_coverage=1 00:18:16.065 --rc genhtml_function_coverage=1 00:18:16.065 --rc genhtml_legend=1 00:18:16.065 --rc geninfo_all_blocks=1 00:18:16.065 --rc geninfo_unexecuted_blocks=1 00:18:16.065 00:18:16.065 ' 00:18:16.065 08:09:27 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.065 08:09:27 -- nvmf/common.sh@7 -- # uname -s 00:18:16.065 08:09:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.065 08:09:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.065 08:09:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.065 08:09:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.065 08:09:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.065 08:09:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.065 08:09:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.065 08:09:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.065 08:09:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.065 08:09:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.065 08:09:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
00:18:16.065 08:09:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:18:16.065 08:09:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.065 08:09:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.065 08:09:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.065 08:09:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.065 08:09:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.065 08:09:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.065 08:09:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.065 08:09:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.065 08:09:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.065 08:09:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.065 08:09:27 -- paths/export.sh@5 -- # export PATH 00:18:16.065 08:09:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.065 08:09:27 -- nvmf/common.sh@46 -- # : 0 00:18:16.065 08:09:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:16.065 08:09:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:16.065 08:09:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:16.065 08:09:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.065 08:09:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.065 08:09:27 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:16.065 08:09:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:16.065 08:09:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:16.065 08:09:27 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:16.065 08:09:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:16.065 08:09:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.065 08:09:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:16.065 08:09:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:16.065 08:09:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:16.065 08:09:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.065 08:09:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.065 08:09:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.065 08:09:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:16.065 08:09:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:16.065 08:09:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:16.065 08:09:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:16.065 08:09:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:16.065 08:09:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:16.065 08:09:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.065 08:09:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.065 08:09:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.065 08:09:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:16.065 08:09:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.065 08:09:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.065 08:09:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.065 08:09:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.065 08:09:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.065 08:09:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.065 08:09:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.065 08:09:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.065 08:09:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:16.065 08:09:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:16.065 Cannot find device "nvmf_tgt_br" 00:18:16.065 08:09:27 -- nvmf/common.sh@154 -- # true 00:18:16.065 08:09:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.065 Cannot find device "nvmf_tgt_br2" 00:18:16.065 08:09:27 -- nvmf/common.sh@155 -- # true 00:18:16.065 08:09:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:16.065 08:09:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:16.065 Cannot find device "nvmf_tgt_br" 00:18:16.065 08:09:27 -- nvmf/common.sh@157 -- # true 00:18:16.065 08:09:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:16.065 Cannot find device "nvmf_tgt_br2" 00:18:16.065 08:09:27 -- nvmf/common.sh@158 -- # true 00:18:16.065 08:09:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:16.065 08:09:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:16.065 08:09:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.065 08:09:27 -- nvmf/common.sh@161 -- # true 00:18:16.065 08:09:27 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.065 08:09:27 -- nvmf/common.sh@162 -- # true 00:18:16.065 08:09:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.065 08:09:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.065 08:09:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.065 08:09:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.065 08:09:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.065 08:09:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.065 08:09:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.065 08:09:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.065 08:09:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.065 08:09:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:16.065 08:09:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:16.065 08:09:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:16.065 08:09:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:16.065 08:09:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.065 08:09:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.324 08:09:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.324 08:09:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:16.324 08:09:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:16.324 08:09:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.324 08:09:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.324 08:09:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.324 08:09:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.324 08:09:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.324 08:09:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:16.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:16.324 00:18:16.324 --- 10.0.0.2 ping statistics --- 00:18:16.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.324 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:16.324 08:09:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:16.324 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.324 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:18:16.324 00:18:16.324 --- 10.0.0.3 ping statistics --- 00:18:16.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.324 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:16.324 08:09:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:18:16.324 00:18:16.324 --- 10.0.0.1 ping statistics --- 00:18:16.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.324 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:16.324 08:09:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.324 08:09:27 -- nvmf/common.sh@421 -- # return 0 00:18:16.324 08:09:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.324 08:09:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.324 08:09:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.324 08:09:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.324 08:09:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.324 08:09:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.324 08:09:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.324 08:09:27 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=90504 00:18:16.324 08:09:27 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:16.324 08:09:27 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:16.324 08:09:27 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 90504 00:18:16.325 08:09:27 -- common/autotest_common.sh@829 -- # '[' -z 90504 ']' 00:18:16.325 08:09:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.325 08:09:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.325 08:09:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:16.325 08:09:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.325 08:09:27 -- common/autotest_common.sh@10 -- # set +x 00:18:17.261 08:09:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.261 08:09:28 -- common/autotest_common.sh@862 -- # return 0 00:18:17.261 08:09:28 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.262 08:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.262 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:17.262 08:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.262 08:09:28 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:17.262 08:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.262 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:17.536 Malloc0 00:18:17.536 08:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.536 08:09:28 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:17.536 08:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.536 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:17.536 08:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.536 08:09:28 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.536 08:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.536 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:17.536 08:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.536 08:09:28 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.536 08:09:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.536 08:09:28 -- common/autotest_common.sh@10 -- # set +x 00:18:17.536 08:09:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.536 08:09:28 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:17.536 08:09:28 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:17.815 Shutting down the fuzz application 00:18:17.815 08:09:28 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:18.085 Shutting down the fuzz application 00:18:18.085 08:09:29 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.085 08:09:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.085 08:09:29 -- common/autotest_common.sh@10 -- # set +x 00:18:18.085 08:09:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.085 08:09:29 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:18.085 08:09:29 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:18.085 08:09:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.085 08:09:29 -- nvmf/common.sh@116 -- # sync 00:18:18.085 08:09:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.085 08:09:29 -- nvmf/common.sh@119 -- # set +e 00:18:18.085 08:09:29 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.085 08:09:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.085 rmmod nvme_tcp 00:18:18.343 rmmod nvme_fabrics 00:18:18.343 rmmod nvme_keyring 00:18:18.343 08:09:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.343 08:09:29 -- nvmf/common.sh@123 -- # set -e 00:18:18.343 08:09:29 -- nvmf/common.sh@124 -- # return 0 00:18:18.343 08:09:29 -- nvmf/common.sh@477 -- # '[' -n 90504 ']' 00:18:18.343 08:09:29 -- nvmf/common.sh@478 -- # killprocess 90504 00:18:18.343 08:09:29 -- common/autotest_common.sh@936 -- # '[' -z 90504 ']' 00:18:18.343 08:09:29 -- common/autotest_common.sh@940 -- # kill -0 90504 00:18:18.343 08:09:29 -- common/autotest_common.sh@941 -- # uname 00:18:18.343 08:09:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.343 08:09:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90504 00:18:18.343 killing process with pid 90504 00:18:18.343 08:09:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.343 08:09:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.343 08:09:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90504' 00:18:18.343 08:09:29 -- common/autotest_common.sh@955 -- # kill 90504 00:18:18.343 08:09:29 -- common/autotest_common.sh@960 -- # wait 90504 00:18:18.601 08:09:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:18.601 08:09:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:18.601 08:09:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:18.601 08:09:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.601 08:09:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:18.601 08:09:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.601 08:09:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.601 08:09:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.601 08:09:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:18.601 08:09:29 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:18.601 00:18:18.601 real 0m2.824s 00:18:18.601 user 0m2.976s 00:18:18.601 sys 0m0.699s 00:18:18.601 08:09:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:18.601 08:09:29 -- common/autotest_common.sh@10 -- # set +x 00:18:18.601 ************************************ 00:18:18.601 END TEST nvmf_fuzz 00:18:18.601 ************************************ 00:18:18.601 08:09:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:18.601 08:09:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:18.601 08:09:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:18.601 08:09:29 -- common/autotest_common.sh@10 -- # set +x 00:18:18.601 ************************************ 00:18:18.601 START TEST nvmf_multiconnection 00:18:18.601 ************************************ 00:18:18.601 08:09:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:18.601 * Looking for test storage... 
00:18:18.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:18.601 08:09:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:18.601 08:09:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:18.601 08:09:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:18.859 08:09:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:18.860 08:09:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:18.860 08:09:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:18.860 08:09:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:18.860 08:09:29 -- scripts/common.sh@335 -- # IFS=.-: 00:18:18.860 08:09:29 -- scripts/common.sh@335 -- # read -ra ver1 00:18:18.860 08:09:29 -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.860 08:09:29 -- scripts/common.sh@336 -- # read -ra ver2 00:18:18.860 08:09:29 -- scripts/common.sh@337 -- # local 'op=<' 00:18:18.860 08:09:29 -- scripts/common.sh@339 -- # ver1_l=2 00:18:18.860 08:09:29 -- scripts/common.sh@340 -- # ver2_l=1 00:18:18.860 08:09:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:18.860 08:09:29 -- scripts/common.sh@343 -- # case "$op" in 00:18:18.860 08:09:29 -- scripts/common.sh@344 -- # : 1 00:18:18.860 08:09:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:18.860 08:09:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:18.860 08:09:29 -- scripts/common.sh@364 -- # decimal 1 00:18:18.860 08:09:29 -- scripts/common.sh@352 -- # local d=1 00:18:18.860 08:09:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.860 08:09:29 -- scripts/common.sh@354 -- # echo 1 00:18:18.860 08:09:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:18.860 08:09:29 -- scripts/common.sh@365 -- # decimal 2 00:18:18.860 08:09:29 -- scripts/common.sh@352 -- # local d=2 00:18:18.860 08:09:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.860 08:09:29 -- scripts/common.sh@354 -- # echo 2 00:18:18.860 08:09:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:18.860 08:09:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:18.860 08:09:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:18.860 08:09:29 -- scripts/common.sh@367 -- # return 0 00:18:18.860 08:09:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.860 08:09:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.860 --rc genhtml_branch_coverage=1 00:18:18.860 --rc genhtml_function_coverage=1 00:18:18.860 --rc genhtml_legend=1 00:18:18.860 --rc geninfo_all_blocks=1 00:18:18.860 --rc geninfo_unexecuted_blocks=1 00:18:18.860 00:18:18.860 ' 00:18:18.860 08:09:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.860 --rc genhtml_branch_coverage=1 00:18:18.860 --rc genhtml_function_coverage=1 00:18:18.860 --rc genhtml_legend=1 00:18:18.860 --rc geninfo_all_blocks=1 00:18:18.860 --rc geninfo_unexecuted_blocks=1 00:18:18.860 00:18:18.860 ' 00:18:18.860 08:09:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.860 --rc genhtml_branch_coverage=1 00:18:18.860 --rc genhtml_function_coverage=1 00:18:18.860 --rc genhtml_legend=1 00:18:18.860 --rc geninfo_all_blocks=1 00:18:18.860 --rc geninfo_unexecuted_blocks=1 00:18:18.860 00:18:18.860 ' 00:18:18.860 
08:09:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:18.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.860 --rc genhtml_branch_coverage=1 00:18:18.860 --rc genhtml_function_coverage=1 00:18:18.860 --rc genhtml_legend=1 00:18:18.860 --rc geninfo_all_blocks=1 00:18:18.860 --rc geninfo_unexecuted_blocks=1 00:18:18.860 00:18:18.860 ' 00:18:18.860 08:09:29 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.860 08:09:29 -- nvmf/common.sh@7 -- # uname -s 00:18:18.860 08:09:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.860 08:09:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.860 08:09:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.860 08:09:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.860 08:09:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.860 08:09:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.860 08:09:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.860 08:09:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.860 08:09:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.860 08:09:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.860 08:09:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:18:18.860 08:09:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:18:18.860 08:09:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.860 08:09:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.860 08:09:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.860 08:09:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.860 08:09:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.860 08:09:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.860 08:09:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.860 08:09:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.860 08:09:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.860 08:09:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.860 08:09:29 -- paths/export.sh@5 -- # export PATH 00:18:18.860 08:09:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.860 08:09:29 -- nvmf/common.sh@46 -- # : 0 00:18:18.860 08:09:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:18.860 08:09:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:18.860 08:09:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:18.860 08:09:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.860 08:09:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.860 08:09:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:18.860 08:09:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:18.860 08:09:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:18.860 08:09:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.860 08:09:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.860 08:09:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:18.860 08:09:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:18.860 08:09:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:18.860 08:09:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.860 08:09:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:18.860 08:09:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:18.860 08:09:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:18.860 08:09:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.860 08:09:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.860 08:09:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.860 08:09:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:18.860 08:09:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:18.860 08:09:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:18.860 08:09:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:18.860 08:09:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:18.860 08:09:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:18.860 08:09:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.861 08:09:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.861 08:09:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:18.861 08:09:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:18.861 08:09:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.861 08:09:29 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.861 08:09:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.861 08:09:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.861 08:09:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.861 08:09:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.861 08:09:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.861 08:09:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.861 08:09:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:18.861 08:09:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:18.861 Cannot find device "nvmf_tgt_br" 00:18:18.861 08:09:30 -- nvmf/common.sh@154 -- # true 00:18:18.861 08:09:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.861 Cannot find device "nvmf_tgt_br2" 00:18:18.861 08:09:30 -- nvmf/common.sh@155 -- # true 00:18:18.861 08:09:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:18.861 08:09:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:18.861 Cannot find device "nvmf_tgt_br" 00:18:18.861 08:09:30 -- nvmf/common.sh@157 -- # true 00:18:18.861 08:09:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:18.861 Cannot find device "nvmf_tgt_br2" 00:18:18.861 08:09:30 -- nvmf/common.sh@158 -- # true 00:18:18.861 08:09:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:18.861 08:09:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:18.861 08:09:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.861 08:09:30 -- nvmf/common.sh@161 -- # true 00:18:18.861 08:09:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.861 08:09:30 -- nvmf/common.sh@162 -- # true 00:18:18.861 08:09:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.861 08:09:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.119 08:09:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.119 08:09:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.119 08:09:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.119 08:09:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.119 08:09:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.119 08:09:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:19.119 08:09:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:19.119 08:09:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:19.119 08:09:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:19.119 08:09:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:19.119 08:09:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:19.119 08:09:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.119 08:09:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:19.119 08:09:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.119 08:09:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:19.119 08:09:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:19.119 08:09:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.119 08:09:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.119 08:09:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.119 08:09:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.119 08:09:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.119 08:09:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:19.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:18:19.119 00:18:19.119 --- 10.0.0.2 ping statistics --- 00:18:19.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.119 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:18:19.119 08:09:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:19.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:19.119 00:18:19.119 --- 10.0.0.3 ping statistics --- 00:18:19.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.119 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:19.119 08:09:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:19.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:18:19.119 00:18:19.119 --- 10.0.0.1 ping statistics --- 00:18:19.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.119 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:19.119 08:09:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.119 08:09:30 -- nvmf/common.sh@421 -- # return 0 00:18:19.119 08:09:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:19.119 08:09:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.119 08:09:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:19.119 08:09:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:19.119 08:09:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.119 08:09:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:19.119 08:09:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:19.119 08:09:30 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:19.119 08:09:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:19.119 08:09:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.119 08:09:30 -- common/autotest_common.sh@10 -- # set +x 00:18:19.119 08:09:30 -- nvmf/common.sh@469 -- # nvmfpid=90711 00:18:19.119 08:09:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:19.119 08:09:30 -- nvmf/common.sh@470 -- # waitforlisten 90711 00:18:19.119 08:09:30 -- common/autotest_common.sh@829 -- # '[' -z 90711 ']' 00:18:19.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:19.119 08:09:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.119 08:09:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.119 08:09:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.119 08:09:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.119 08:09:30 -- common/autotest_common.sh@10 -- # set +x 00:18:19.119 [2024-12-07 08:09:30.375397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:19.119 [2024-12-07 08:09:30.376008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.376 [2024-12-07 08:09:30.511410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:19.376 [2024-12-07 08:09:30.587460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.376 [2024-12-07 08:09:30.588044] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.376 [2024-12-07 08:09:30.588353] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.376 [2024-12-07 08:09:30.588600] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.376 [2024-12-07 08:09:30.588994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.376 [2024-12-07 08:09:30.589133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.376 [2024-12-07 08:09:30.589247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:19.376 [2024-12-07 08:09:30.589250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.307 08:09:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.307 08:09:31 -- common/autotest_common.sh@862 -- # return 0 00:18:20.307 08:09:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.307 08:09:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 08:09:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.307 08:09:31 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 [2024-12-07 08:09:31.423384] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:20.307 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.307 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 Malloc1 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 [2024-12-07 08:09:31.508831] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.307 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 Malloc2 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.307 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.307 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.307 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:20.307 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.307 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.565 Malloc3 00:18:20.565 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.565 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:20.565 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.565 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.565 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.565 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:20.565 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.565 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.565 08:09:31 -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:20.565 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:20.565 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.565 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.565 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.565 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.565 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:20.565 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.565 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.565 Malloc4 00:18:20.565 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.565 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:20.565 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.565 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.566 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 Malloc5 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.566 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:20.566 08:09:31 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 Malloc6 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.566 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 Malloc7 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.566 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.566 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.566 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:20.566 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.566 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.824 Malloc8 00:18:20.824 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.824 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:20.824 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.824 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.824 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.824 08:09:31 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:20.824 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.824 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.824 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.824 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:20.824 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.824 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.824 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.824 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.824 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:20.824 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.824 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 Malloc9 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.825 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 Malloc10 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:31 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.825 08:09:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 Malloc11 00:18:20.825 08:09:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:20.825 08:09:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:31 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:20.825 08:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:20.825 08:09:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.825 08:09:32 -- common/autotest_common.sh@10 -- # set +x 00:18:20.825 08:09:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.825 08:09:32 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:20.825 08:09:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.825 08:09:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:21.082 08:09:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:21.082 08:09:32 -- common/autotest_common.sh@1187 -- # local i=0 00:18:21.082 08:09:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.082 08:09:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:21.082 08:09:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:22.979 08:09:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:22.979 08:09:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:22.979 08:09:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:22.979 08:09:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:22.979 08:09:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.979 08:09:34 -- common/autotest_common.sh@1197 -- # return 0 00:18:22.979 08:09:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.979 08:09:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:23.237 08:09:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:23.237 08:09:34 -- common/autotest_common.sh@1187 -- # local i=0 00:18:23.237 08:09:34 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.237 08:09:34 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:23.237 08:09:34 -- common/autotest_common.sh@1194 -- # 
sleep 2 00:18:25.772 08:09:36 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:25.772 08:09:36 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:25.772 08:09:36 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:18:25.772 08:09:36 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:25.772 08:09:36 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.772 08:09:36 -- common/autotest_common.sh@1197 -- # return 0 00:18:25.772 08:09:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:25.772 08:09:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:25.772 08:09:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:25.772 08:09:36 -- common/autotest_common.sh@1187 -- # local i=0 00:18:25.772 08:09:36 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.772 08:09:36 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:25.772 08:09:36 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:27.669 08:09:38 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:27.669 08:09:38 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:27.669 08:09:38 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:27.669 08:09:38 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:27.669 08:09:38 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.669 08:09:38 -- common/autotest_common.sh@1197 -- # return 0 00:18:27.669 08:09:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:27.669 08:09:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:27.669 08:09:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:27.669 08:09:38 -- common/autotest_common.sh@1187 -- # local i=0 00:18:27.669 08:09:38 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.669 08:09:38 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:27.669 08:09:38 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:29.570 08:09:40 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:29.570 08:09:40 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:29.570 08:09:40 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:29.570 08:09:40 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:29.570 08:09:40 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.570 08:09:40 -- common/autotest_common.sh@1197 -- # return 0 00:18:29.570 08:09:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.570 08:09:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:29.828 08:09:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:29.828 08:09:41 -- common/autotest_common.sh@1187 -- # local i=0 00:18:29.828 08:09:41 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 
00:18:29.828 08:09:41 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:29.828 08:09:41 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:32.357 08:09:43 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:32.357 08:09:43 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:32.357 08:09:43 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:32.357 08:09:43 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:32.357 08:09:43 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.357 08:09:43 -- common/autotest_common.sh@1197 -- # return 0 00:18:32.357 08:09:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:32.357 08:09:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:32.357 08:09:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:32.357 08:09:43 -- common/autotest_common.sh@1187 -- # local i=0 00:18:32.357 08:09:43 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.357 08:09:43 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:32.357 08:09:43 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:34.258 08:09:45 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:34.258 08:09:45 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:34.258 08:09:45 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:34.258 08:09:45 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:34.258 08:09:45 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.258 08:09:45 -- common/autotest_common.sh@1197 -- # return 0 00:18:34.258 08:09:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.258 08:09:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:34.258 08:09:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:34.258 08:09:45 -- common/autotest_common.sh@1187 -- # local i=0 00:18:34.258 08:09:45 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.258 08:09:45 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:34.258 08:09:45 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:36.157 08:09:47 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:36.420 08:09:47 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:36.420 08:09:47 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:36.420 08:09:47 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:36.420 08:09:47 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.420 08:09:47 -- common/autotest_common.sh@1197 -- # return 0 00:18:36.420 08:09:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.420 08:09:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:36.420 08:09:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:36.420 08:09:47 -- 
common/autotest_common.sh@1187 -- # local i=0 00:18:36.420 08:09:47 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.420 08:09:47 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:36.420 08:09:47 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:38.977 08:09:49 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:38.977 08:09:49 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:38.977 08:09:49 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:38.977 08:09:49 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:38.977 08:09:49 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.977 08:09:49 -- common/autotest_common.sh@1197 -- # return 0 00:18:38.977 08:09:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:38.977 08:09:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:38.977 08:09:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:38.977 08:09:49 -- common/autotest_common.sh@1187 -- # local i=0 00:18:38.977 08:09:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.977 08:09:49 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:38.977 08:09:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:40.873 08:09:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:40.873 08:09:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:40.873 08:09:51 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:40.873 08:09:51 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:40.873 08:09:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.873 08:09:51 -- common/autotest_common.sh@1197 -- # return 0 00:18:40.873 08:09:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.873 08:09:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:40.873 08:09:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:40.873 08:09:52 -- common/autotest_common.sh@1187 -- # local i=0 00:18:40.873 08:09:52 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.873 08:09:52 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:40.873 08:09:52 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:42.773 08:09:54 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:43.031 08:09:54 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:43.031 08:09:54 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:43.031 08:09:54 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:43.031 08:09:54 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.031 08:09:54 -- common/autotest_common.sh@1197 -- # return 0 00:18:43.031 08:09:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.031 08:09:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n 
nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:43.031 08:09:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:43.031 08:09:54 -- common/autotest_common.sh@1187 -- # local i=0 00:18:43.031 08:09:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.031 08:09:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:43.031 08:09:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:45.560 08:09:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:45.560 08:09:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:45.560 08:09:56 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:18:45.560 08:09:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:45.561 08:09:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.561 08:09:56 -- common/autotest_common.sh@1197 -- # return 0 00:18:45.561 08:09:56 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:45.561 [global] 00:18:45.561 thread=1 00:18:45.561 invalidate=1 00:18:45.561 rw=read 00:18:45.561 time_based=1 00:18:45.561 runtime=10 00:18:45.561 ioengine=libaio 00:18:45.561 direct=1 00:18:45.561 bs=262144 00:18:45.561 iodepth=64 00:18:45.561 norandommap=1 00:18:45.561 numjobs=1 00:18:45.561 00:18:45.561 [job0] 00:18:45.561 filename=/dev/nvme0n1 00:18:45.561 [job1] 00:18:45.561 filename=/dev/nvme10n1 00:18:45.561 [job2] 00:18:45.561 filename=/dev/nvme1n1 00:18:45.561 [job3] 00:18:45.561 filename=/dev/nvme2n1 00:18:45.561 [job4] 00:18:45.561 filename=/dev/nvme3n1 00:18:45.561 [job5] 00:18:45.561 filename=/dev/nvme4n1 00:18:45.561 [job6] 00:18:45.561 filename=/dev/nvme5n1 00:18:45.561 [job7] 00:18:45.561 filename=/dev/nvme6n1 00:18:45.561 [job8] 00:18:45.561 filename=/dev/nvme7n1 00:18:45.561 [job9] 00:18:45.561 filename=/dev/nvme8n1 00:18:45.561 [job10] 00:18:45.561 filename=/dev/nvme9n1 00:18:45.561 Could not set queue depth (nvme0n1) 00:18:45.561 Could not set queue depth (nvme10n1) 00:18:45.561 Could not set queue depth (nvme1n1) 00:18:45.561 Could not set queue depth (nvme2n1) 00:18:45.561 Could not set queue depth (nvme3n1) 00:18:45.561 Could not set queue depth (nvme4n1) 00:18:45.561 Could not set queue depth (nvme5n1) 00:18:45.561 Could not set queue depth (nvme6n1) 00:18:45.561 Could not set queue depth (nvme7n1) 00:18:45.561 Could not set queue depth (nvme8n1) 00:18:45.561 Could not set queue depth (nvme9n1) 00:18:45.561 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job8: (g=0): rw=read, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:45.561 fio-3.35 00:18:45.561 Starting 11 threads 00:18:57.762 00:18:57.762 job0: (groupid=0, jobs=1): err= 0: pid=91194: Sat Dec 7 08:10:06 2024 00:18:57.762 read: IOPS=697, BW=174MiB/s (183MB/s)(1764MiB/10113msec) 00:18:57.762 slat (usec): min=18, max=50782, avg=1393.25, stdev=5037.39 00:18:57.762 clat (msec): min=19, max=253, avg=90.12, stdev=25.54 00:18:57.762 lat (msec): min=19, max=253, avg=91.52, stdev=26.20 00:18:57.762 clat percentiles (msec): 00:18:57.762 | 1.00th=[ 38], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 72], 00:18:57.762 | 30.00th=[ 78], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 91], 00:18:57.762 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 120], 95.00th=[ 136], 00:18:57.762 | 99.00th=[ 161], 99.50th=[ 182], 99.90th=[ 253], 99.95th=[ 253], 00:18:57.762 | 99.99th=[ 255] 00:18:57.762 bw ( KiB/s): min=108032, max=260617, per=9.61%, avg=179014.90, stdev=41333.52, samples=20 00:18:57.762 iops : min= 422, max= 1018, avg=699.25, stdev=161.45, samples=20 00:18:57.762 lat (msec) : 20=0.07%, 50=2.08%, 100=65.66%, 250=32.00%, 500=0.18% 00:18:57.762 cpu : usr=0.25%, sys=2.16%, ctx=1203, majf=0, minf=4097 00:18:57.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:57.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.762 issued rwts: total=7056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.762 job1: (groupid=0, jobs=1): err= 0: pid=91195: Sat Dec 7 08:10:06 2024 00:18:57.762 read: IOPS=670, BW=168MiB/s (176MB/s)(1686MiB/10062msec) 00:18:57.762 slat (usec): min=16, max=50487, avg=1411.48, stdev=4939.88 00:18:57.762 clat (msec): min=15, max=170, avg=93.92, stdev=22.08 00:18:57.762 lat (msec): min=20, max=175, avg=95.33, stdev=22.73 00:18:57.762 clat percentiles (msec): 00:18:57.762 | 1.00th=[ 42], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 77], 00:18:57.762 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 93], 60.00th=[ 101], 00:18:57.762 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 124], 95.00th=[ 128], 00:18:57.762 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 169], 00:18:57.762 | 99.99th=[ 171] 00:18:57.762 bw ( KiB/s): min=130308, max=246765, per=9.17%, avg=170947.20, stdev=32733.45, samples=20 00:18:57.762 iops : min= 509, max= 963, avg=667.45, stdev=127.89, samples=20 00:18:57.762 lat (msec) : 20=0.01%, 50=1.96%, 100=58.27%, 250=39.76% 00:18:57.762 cpu : usr=0.22%, sys=2.09%, ctx=1383, majf=0, minf=4097 00:18:57.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:57.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.762 issued rwts: total=6743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.762 job2: (groupid=0, jobs=1): err= 0: pid=91196: Sat Dec 7 08:10:06 2024 00:18:57.762 read: IOPS=543, BW=136MiB/s (142MB/s)(1374MiB/10114msec) 00:18:57.762 slat (usec): min=19, max=105068, avg=1804.15, stdev=7049.65 00:18:57.762 clat 
(msec): min=11, max=253, avg=115.82, stdev=27.42 00:18:57.762 lat (msec): min=12, max=253, avg=117.63, stdev=28.38 00:18:57.762 clat percentiles (msec): 00:18:57.762 | 1.00th=[ 37], 5.00th=[ 80], 10.00th=[ 86], 20.00th=[ 94], 00:18:57.762 | 30.00th=[ 103], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 122], 00:18:57.762 | 70.00th=[ 131], 80.00th=[ 138], 90.00th=[ 146], 95.00th=[ 155], 00:18:57.762 | 99.00th=[ 215], 99.50th=[ 234], 99.90th=[ 247], 99.95th=[ 247], 00:18:57.762 | 99.99th=[ 253] 00:18:57.762 bw ( KiB/s): min=101376, max=184832, per=7.46%, avg=138980.75, stdev=24772.72, samples=20 00:18:57.762 iops : min= 396, max= 722, avg=542.80, stdev=96.78, samples=20 00:18:57.762 lat (msec) : 20=0.27%, 50=1.20%, 100=25.92%, 250=72.59%, 500=0.02% 00:18:57.762 cpu : usr=0.22%, sys=1.61%, ctx=1158, majf=0, minf=4097 00:18:57.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:57.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.762 issued rwts: total=5494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.762 job3: (groupid=0, jobs=1): err= 0: pid=91197: Sat Dec 7 08:10:06 2024 00:18:57.762 read: IOPS=763, BW=191MiB/s (200MB/s)(1922MiB/10068msec) 00:18:57.762 slat (usec): min=16, max=51468, avg=1267.45, stdev=4564.19 00:18:57.762 clat (msec): min=22, max=143, avg=82.41, stdev=16.69 00:18:57.762 lat (msec): min=23, max=144, avg=83.67, stdev=17.32 00:18:57.762 clat percentiles (msec): 00:18:57.762 | 1.00th=[ 36], 5.00th=[ 54], 10.00th=[ 61], 20.00th=[ 71], 00:18:57.762 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 87], 00:18:57.762 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 102], 95.00th=[ 111], 00:18:57.762 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 138], 99.95th=[ 144], 00:18:57.762 | 99.99th=[ 144] 00:18:57.762 bw ( KiB/s): min=135168, max=278528, per=10.47%, avg=195167.50, stdev=31429.12, samples=20 00:18:57.762 iops : min= 528, max= 1088, avg=762.35, stdev=122.77, samples=20 00:18:57.762 lat (msec) : 50=3.88%, 100=85.43%, 250=10.69% 00:18:57.762 cpu : usr=0.32%, sys=2.60%, ctx=1408, majf=0, minf=4097 00:18:57.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:57.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.762 issued rwts: total=7687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.762 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.762 job4: (groupid=0, jobs=1): err= 0: pid=91198: Sat Dec 7 08:10:06 2024 00:18:57.762 read: IOPS=862, BW=216MiB/s (226MB/s)(2159MiB/10013msec) 00:18:57.762 slat (usec): min=20, max=96209, avg=1143.53, stdev=5501.86 00:18:57.762 clat (msec): min=3, max=228, avg=72.95, stdev=48.81 00:18:57.762 lat (msec): min=3, max=230, avg=74.10, stdev=49.82 00:18:57.762 clat percentiles (msec): 00:18:57.762 | 1.00th=[ 11], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 28], 00:18:57.762 | 30.00th=[ 31], 40.00th=[ 34], 50.00th=[ 39], 60.00th=[ 106], 00:18:57.762 | 70.00th=[ 115], 80.00th=[ 125], 90.00th=[ 140], 95.00th=[ 146], 00:18:57.762 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 178], 99.95th=[ 201], 00:18:57.762 | 99.99th=[ 228] 00:18:57.762 bw ( KiB/s): min=104448, max=559009, per=10.91%, avg=203296.84, stdev=159999.36, samples=19 00:18:57.762 iops : min= 408, max= 2183, avg=794.05, 
stdev=624.93, samples=19 00:18:57.762 lat (msec) : 4=0.03%, 10=0.63%, 20=4.52%, 50=47.83%, 100=2.71% 00:18:57.762 lat (msec) : 250=44.28% 00:18:57.762 cpu : usr=0.35%, sys=2.60%, ctx=1557, majf=0, minf=4097 00:18:57.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:57.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.763 issued rwts: total=8634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.763 job5: (groupid=0, jobs=1): err= 0: pid=91199: Sat Dec 7 08:10:06 2024 00:18:57.763 read: IOPS=544, BW=136MiB/s (143MB/s)(1377MiB/10109msec) 00:18:57.763 slat (usec): min=20, max=239671, avg=1803.99, stdev=7327.07 00:18:57.763 clat (msec): min=21, max=360, avg=115.44, stdev=34.72 00:18:57.763 lat (msec): min=21, max=360, avg=117.25, stdev=35.65 00:18:57.763 clat percentiles (msec): 00:18:57.763 | 1.00th=[ 38], 5.00th=[ 56], 10.00th=[ 63], 20.00th=[ 92], 00:18:57.763 | 30.00th=[ 104], 40.00th=[ 111], 50.00th=[ 117], 60.00th=[ 125], 00:18:57.763 | 70.00th=[ 134], 80.00th=[ 140], 90.00th=[ 148], 95.00th=[ 155], 00:18:57.763 | 99.00th=[ 257], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 296], 00:18:57.763 | 99.99th=[ 363] 00:18:57.763 bw ( KiB/s): min=99840, max=267264, per=7.47%, avg=139274.90, stdev=36926.87, samples=20 00:18:57.763 iops : min= 390, max= 1044, avg=543.80, stdev=144.18, samples=20 00:18:57.763 lat (msec) : 50=1.22%, 100=26.02%, 250=71.66%, 500=1.11% 00:18:57.763 cpu : usr=0.15%, sys=2.07%, ctx=1007, majf=0, minf=4097 00:18:57.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:57.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.763 issued rwts: total=5508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.763 job6: (groupid=0, jobs=1): err= 0: pid=91200: Sat Dec 7 08:10:06 2024 00:18:57.763 read: IOPS=698, BW=175MiB/s (183MB/s)(1764MiB/10107msec) 00:18:57.763 slat (usec): min=17, max=64048, avg=1383.08, stdev=5052.47 00:18:57.763 clat (msec): min=27, max=268, avg=90.08, stdev=21.20 00:18:57.763 lat (msec): min=29, max=268, avg=91.46, stdev=21.86 00:18:57.763 clat percentiles (msec): 00:18:57.763 | 1.00th=[ 51], 5.00th=[ 66], 10.00th=[ 71], 20.00th=[ 78], 00:18:57.763 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 90], 00:18:57.763 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 113], 95.00th=[ 131], 00:18:57.763 | 99.00th=[ 163], 99.50th=[ 182], 99.90th=[ 249], 99.95th=[ 249], 00:18:57.763 | 99.99th=[ 271] 00:18:57.763 bw ( KiB/s): min=101684, max=233984, per=9.60%, avg=178984.40, stdev=29655.99, samples=20 00:18:57.763 iops : min= 397, max= 914, avg=698.95, stdev=115.85, samples=20 00:18:57.763 lat (msec) : 50=0.91%, 100=80.25%, 250=18.82%, 500=0.03% 00:18:57.763 cpu : usr=0.16%, sys=2.27%, ctx=1249, majf=0, minf=4097 00:18:57.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:57.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.763 issued rwts: total=7057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.763 job7: (groupid=0, 
jobs=1): err= 0: pid=91201: Sat Dec 7 08:10:06 2024 00:18:57.763 read: IOPS=703, BW=176MiB/s (185MB/s)(1769MiB/10053msec) 00:18:57.763 slat (usec): min=15, max=57852, avg=1391.75, stdev=5005.32 00:18:57.763 clat (msec): min=17, max=180, avg=89.36, stdev=23.34 00:18:57.763 lat (msec): min=17, max=180, avg=90.75, stdev=24.09 00:18:57.763 clat percentiles (msec): 00:18:57.763 | 1.00th=[ 26], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 73], 00:18:57.763 | 30.00th=[ 80], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 94], 00:18:57.763 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 126], 00:18:57.763 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 148], 00:18:57.763 | 99.99th=[ 180] 00:18:57.763 bw ( KiB/s): min=121074, max=274944, per=9.63%, avg=179404.60, stdev=41203.62, samples=20 00:18:57.763 iops : min= 472, max= 1074, avg=700.50, stdev=161.02, samples=20 00:18:57.763 lat (msec) : 20=0.28%, 50=4.51%, 100=61.36%, 250=33.85% 00:18:57.763 cpu : usr=0.28%, sys=1.97%, ctx=1198, majf=0, minf=4097 00:18:57.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:57.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.763 issued rwts: total=7076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.763 job8: (groupid=0, jobs=1): err= 0: pid=91202: Sat Dec 7 08:10:06 2024 00:18:57.763 read: IOPS=595, BW=149MiB/s (156MB/s)(1499MiB/10061msec) 00:18:57.763 slat (usec): min=19, max=82684, avg=1621.50, stdev=5861.92 00:18:57.763 clat (msec): min=39, max=197, avg=105.63, stdev=26.12 00:18:57.763 lat (msec): min=39, max=204, avg=107.25, stdev=26.94 00:18:57.763 clat percentiles (msec): 00:18:57.763 | 1.00th=[ 57], 5.00th=[ 69], 10.00th=[ 77], 20.00th=[ 84], 00:18:57.763 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 100], 60.00th=[ 109], 00:18:57.763 | 70.00th=[ 120], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 150], 00:18:57.763 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 184], 00:18:57.763 | 99.99th=[ 199] 00:18:57.763 bw ( KiB/s): min=107222, max=217600, per=8.15%, avg=151794.65, stdev=34208.69, samples=20 00:18:57.763 iops : min= 418, max= 850, avg=592.65, stdev=133.61, samples=20 00:18:57.763 lat (msec) : 50=0.37%, 100=50.07%, 250=49.57% 00:18:57.763 cpu : usr=0.19%, sys=1.69%, ctx=1160, majf=0, minf=4097 00:18:57.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:57.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.763 issued rwts: total=5994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.763 job9: (groupid=0, jobs=1): err= 0: pid=91203: Sat Dec 7 08:10:06 2024 00:18:57.763 read: IOPS=714, BW=179MiB/s (187MB/s)(1794MiB/10045msec) 00:18:57.763 slat (usec): min=20, max=57105, avg=1362.97, stdev=4913.22 00:18:57.763 clat (msec): min=16, max=164, avg=88.15, stdev=23.35 00:18:57.763 lat (msec): min=19, max=175, avg=89.51, stdev=24.02 00:18:57.763 clat percentiles (msec): 00:18:57.763 | 1.00th=[ 29], 5.00th=[ 46], 10.00th=[ 59], 20.00th=[ 71], 00:18:57.763 | 30.00th=[ 79], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 92], 00:18:57.763 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 125], 00:18:57.763 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 150], 
99.95th=[ 155], 00:18:57.763 | 99.99th=[ 165] 00:18:57.763 bw ( KiB/s): min=128766, max=311296, per=9.76%, avg=181950.75, stdev=44853.86, samples=20 00:18:57.763 iops : min= 502, max= 1216, avg=710.60, stdev=175.33, samples=20 00:18:57.763 lat (msec) : 20=0.08%, 50=5.98%, 100=64.57%, 250=29.37% 00:18:57.763 cpu : usr=0.32%, sys=2.49%, ctx=1345, majf=0, minf=4097 00:18:57.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:57.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.763 issued rwts: total=7174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.763 job10: (groupid=0, jobs=1): err= 0: pid=91204: Sat Dec 7 08:10:06 2024 00:18:57.763 read: IOPS=514, BW=129MiB/s (135MB/s)(1301MiB/10114msec) 00:18:57.763 slat (usec): min=19, max=71284, avg=1921.14, stdev=6513.13 00:18:57.763 clat (msec): min=25, max=248, avg=122.23, stdev=24.20 00:18:57.763 lat (msec): min=25, max=276, avg=124.15, stdev=25.22 00:18:57.763 clat percentiles (msec): 00:18:57.763 | 1.00th=[ 77], 5.00th=[ 88], 10.00th=[ 92], 20.00th=[ 102], 00:18:57.763 | 30.00th=[ 110], 40.00th=[ 116], 50.00th=[ 122], 60.00th=[ 128], 00:18:57.763 | 70.00th=[ 138], 80.00th=[ 144], 90.00th=[ 153], 95.00th=[ 157], 00:18:57.763 | 99.00th=[ 176], 99.50th=[ 211], 99.90th=[ 249], 99.95th=[ 249], 00:18:57.763 | 99.99th=[ 249] 00:18:57.763 bw ( KiB/s): min=103424, max=183296, per=7.06%, avg=131557.65, stdev=21245.88, samples=20 00:18:57.763 iops : min= 404, max= 716, avg=513.85, stdev=82.98, samples=20 00:18:57.763 lat (msec) : 50=0.61%, 100=18.06%, 250=81.32% 00:18:57.763 cpu : usr=0.19%, sys=1.54%, ctx=1034, majf=0, minf=4097 00:18:57.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:57.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:57.763 issued rwts: total=5204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.763 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:57.763 00:18:57.763 Run status group 0 (all jobs): 00:18:57.763 READ: bw=1820MiB/s (1908MB/s), 129MiB/s-216MiB/s (135MB/s-226MB/s), io=18.0GiB (19.3GB), run=10013-10114msec 00:18:57.763 00:18:57.763 Disk stats (read/write): 00:18:57.763 nvme0n1: ios=14012/0, merge=0/0, ticks=1238607/0, in_queue=1238607, util=97.52% 00:18:57.763 nvme10n1: ios=13363/0, merge=0/0, ticks=1239750/0, in_queue=1239750, util=97.51% 00:18:57.763 nvme1n1: ios=10862/0, merge=0/0, ticks=1235621/0, in_queue=1235621, util=97.95% 00:18:57.763 nvme2n1: ios=15257/0, merge=0/0, ticks=1240345/0, in_queue=1240345, util=98.15% 00:18:57.763 nvme3n1: ios=17268/0, merge=0/0, ticks=1239040/0, in_queue=1239040, util=97.97% 00:18:57.763 nvme4n1: ios=10889/0, merge=0/0, ticks=1237613/0, in_queue=1237613, util=98.22% 00:18:57.763 nvme5n1: ios=13993/0, merge=0/0, ticks=1235710/0, in_queue=1235710, util=98.32% 00:18:57.763 nvme6n1: ios=14024/0, merge=0/0, ticks=1239547/0, in_queue=1239547, util=98.26% 00:18:57.763 nvme7n1: ios=11897/0, merge=0/0, ticks=1241346/0, in_queue=1241346, util=98.35% 00:18:57.763 nvme8n1: ios=14221/0, merge=0/0, ticks=1242790/0, in_queue=1242790, util=98.88% 00:18:57.763 nvme9n1: ios=10282/0, merge=0/0, ticks=1236391/0, in_queue=1236391, util=98.76% 00:18:57.763 08:10:06 -- target/multiconnection.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:57.763 [global] 00:18:57.763 thread=1 00:18:57.763 invalidate=1 00:18:57.763 rw=randwrite 00:18:57.763 time_based=1 00:18:57.763 runtime=10 00:18:57.763 ioengine=libaio 00:18:57.763 direct=1 00:18:57.763 bs=262144 00:18:57.763 iodepth=64 00:18:57.763 norandommap=1 00:18:57.763 numjobs=1 00:18:57.763 00:18:57.763 [job0] 00:18:57.763 filename=/dev/nvme0n1 00:18:57.763 [job1] 00:18:57.763 filename=/dev/nvme10n1 00:18:57.763 [job2] 00:18:57.763 filename=/dev/nvme1n1 00:18:57.763 [job3] 00:18:57.763 filename=/dev/nvme2n1 00:18:57.763 [job4] 00:18:57.763 filename=/dev/nvme3n1 00:18:57.763 [job5] 00:18:57.763 filename=/dev/nvme4n1 00:18:57.764 [job6] 00:18:57.764 filename=/dev/nvme5n1 00:18:57.764 [job7] 00:18:57.764 filename=/dev/nvme6n1 00:18:57.764 [job8] 00:18:57.764 filename=/dev/nvme7n1 00:18:57.764 [job9] 00:18:57.764 filename=/dev/nvme8n1 00:18:57.764 [job10] 00:18:57.764 filename=/dev/nvme9n1 00:18:57.764 Could not set queue depth (nvme0n1) 00:18:57.764 Could not set queue depth (nvme10n1) 00:18:57.764 Could not set queue depth (nvme1n1) 00:18:57.764 Could not set queue depth (nvme2n1) 00:18:57.764 Could not set queue depth (nvme3n1) 00:18:57.764 Could not set queue depth (nvme4n1) 00:18:57.764 Could not set queue depth (nvme5n1) 00:18:57.764 Could not set queue depth (nvme6n1) 00:18:57.764 Could not set queue depth (nvme7n1) 00:18:57.764 Could not set queue depth (nvme8n1) 00:18:57.764 Could not set queue depth (nvme9n1) 00:18:57.764 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:57.764 fio-3.35 00:18:57.764 Starting 11 threads 00:19:07.766 00:19:07.766 job0: (groupid=0, jobs=1): err= 0: pid=91405: Sat Dec 7 08:10:17 2024 00:19:07.766 write: IOPS=421, BW=105MiB/s (111MB/s)(1068MiB/10129msec); 0 zone resets 00:19:07.766 slat (usec): min=22, max=14269, avg=2313.42, stdev=4044.68 00:19:07.766 clat (msec): min=14, max=276, avg=149.38, stdev=23.02 00:19:07.766 lat (msec): min=14, max=276, avg=151.69, stdev=23.05 00:19:07.766 clat percentiles (msec): 00:19:07.766 | 1.00th=[ 69], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 142], 00:19:07.766 | 30.00th=[ 148], 40.00th=[ 
150], 50.00th=[ 150], 60.00th=[ 150], 00:19:07.766 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 182], 95.00th=[ 186], 00:19:07.766 | 99.00th=[ 190], 99.50th=[ 230], 99.90th=[ 268], 99.95th=[ 268], 00:19:07.766 | 99.99th=[ 279] 00:19:07.766 bw ( KiB/s): min=88064, max=142848, per=6.32%, avg=107759.20, stdev=12099.40, samples=20 00:19:07.766 iops : min= 344, max= 558, avg=420.90, stdev=47.32, samples=20 00:19:07.766 lat (msec) : 20=0.16%, 50=0.56%, 100=0.91%, 250=98.13%, 500=0.23% 00:19:07.766 cpu : usr=0.85%, sys=1.05%, ctx=5269, majf=0, minf=1 00:19:07.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:07.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.766 issued rwts: total=0,4272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.766 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.766 job1: (groupid=0, jobs=1): err= 0: pid=91414: Sat Dec 7 08:10:17 2024 00:19:07.766 write: IOPS=511, BW=128MiB/s (134MB/s)(1291MiB/10101msec); 0 zone resets 00:19:07.766 slat (usec): min=18, max=37454, avg=1900.11, stdev=3461.52 00:19:07.766 clat (msec): min=40, max=212, avg=123.29, stdev=28.97 00:19:07.766 lat (msec): min=40, max=212, avg=125.19, stdev=29.25 00:19:07.766 clat percentiles (msec): 00:19:07.766 | 1.00th=[ 79], 5.00th=[ 105], 10.00th=[ 106], 20.00th=[ 108], 00:19:07.766 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 113], 00:19:07.766 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 184], 95.00th=[ 188], 00:19:07.766 | 99.00th=[ 192], 99.50th=[ 194], 99.90th=[ 205], 99.95th=[ 205], 00:19:07.766 | 99.99th=[ 213] 00:19:07.766 bw ( KiB/s): min=85504, max=147456, per=7.65%, avg=130516.75, stdev=25593.22, samples=20 00:19:07.766 iops : min= 334, max= 576, avg=509.80, stdev=100.03, samples=20 00:19:07.766 lat (msec) : 50=0.14%, 100=2.19%, 250=97.68% 00:19:07.766 cpu : usr=0.98%, sys=1.30%, ctx=3615, majf=0, minf=1 00:19:07.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:07.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.766 issued rwts: total=0,5162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.766 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.766 job2: (groupid=0, jobs=1): err= 0: pid=91418: Sat Dec 7 08:10:17 2024 00:19:07.766 write: IOPS=834, BW=209MiB/s (219MB/s)(2100MiB/10067msec); 0 zone resets 00:19:07.766 slat (usec): min=17, max=9191, avg=1186.22, stdev=1998.52 00:19:07.766 clat (msec): min=11, max=138, avg=75.48, stdev= 5.00 00:19:07.766 lat (msec): min=11, max=138, avg=76.67, stdev= 4.70 00:19:07.766 clat percentiles (msec): 00:19:07.766 | 1.00th=[ 70], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:19:07.766 | 30.00th=[ 75], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 77], 00:19:07.766 | 70.00th=[ 78], 80.00th=[ 79], 90.00th=[ 80], 95.00th=[ 80], 00:19:07.766 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 130], 99.95th=[ 134], 00:19:07.766 | 99.99th=[ 138] 00:19:07.766 bw ( KiB/s): min=204391, max=218112, per=12.51%, avg=213411.55, stdev=3917.03, samples=20 00:19:07.766 iops : min= 798, max= 852, avg=833.60, stdev=15.38, samples=20 00:19:07.766 lat (msec) : 20=0.10%, 50=0.29%, 100=99.21%, 250=0.40% 00:19:07.766 cpu : usr=1.42%, sys=1.96%, ctx=9071, majf=0, minf=1 00:19:07.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 
00:19:07.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.766 issued rwts: total=0,8401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.766 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.766 job3: (groupid=0, jobs=1): err= 0: pid=91419: Sat Dec 7 08:10:17 2024 00:19:07.766 write: IOPS=394, BW=98.7MiB/s (103MB/s)(999MiB/10120msec); 0 zone resets 00:19:07.766 slat (usec): min=18, max=108354, avg=2497.62, stdev=4698.93 00:19:07.766 clat (msec): min=110, max=270, avg=159.57, stdev=22.83 00:19:07.766 lat (msec): min=111, max=270, avg=162.06, stdev=22.71 00:19:07.766 clat percentiles (msec): 00:19:07.766 | 1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 146], 00:19:07.766 | 30.00th=[ 150], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 153], 00:19:07.766 | 70.00th=[ 153], 80.00th=[ 184], 90.00th=[ 201], 95.00th=[ 205], 00:19:07.766 | 99.00th=[ 228], 99.50th=[ 243], 99.90th=[ 262], 99.95th=[ 271], 00:19:07.766 | 99.99th=[ 271] 00:19:07.766 bw ( KiB/s): min=69632, max=110592, per=5.90%, avg=100651.40, stdev=13541.19, samples=20 00:19:07.766 iops : min= 272, max= 432, avg=393.15, stdev=52.93, samples=20 00:19:07.766 lat (msec) : 250=99.67%, 500=0.33% 00:19:07.766 cpu : usr=0.86%, sys=1.12%, ctx=5139, majf=0, minf=1 00:19:07.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:07.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.766 issued rwts: total=0,3995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.766 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.766 job4: (groupid=0, jobs=1): err= 0: pid=91420: Sat Dec 7 08:10:17 2024 00:19:07.766 write: IOPS=398, BW=99.6MiB/s (104MB/s)(1009MiB/10127msec); 0 zone resets 00:19:07.766 slat (usec): min=22, max=42747, avg=2474.46, stdev=4398.48 00:19:07.766 clat (msec): min=45, max=272, avg=158.06, stdev=20.68 00:19:07.766 lat (msec): min=45, max=272, avg=160.53, stdev=20.53 00:19:07.766 clat percentiles (msec): 00:19:07.766 | 1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 146], 00:19:07.766 | 30.00th=[ 150], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 150], 00:19:07.767 | 70.00th=[ 153], 80.00th=[ 182], 90.00th=[ 194], 95.00th=[ 201], 00:19:07.767 | 99.00th=[ 203], 99.50th=[ 224], 99.90th=[ 264], 99.95th=[ 264], 00:19:07.767 | 99.99th=[ 271] 00:19:07.767 bw ( KiB/s): min=75776, max=110592, per=5.96%, avg=101692.00, stdev=11854.88, samples=20 00:19:07.767 iops : min= 296, max= 432, avg=397.20, stdev=46.36, samples=20 00:19:07.767 lat (msec) : 50=0.10%, 100=0.30%, 250=99.36%, 500=0.25% 00:19:07.767 cpu : usr=0.66%, sys=1.07%, ctx=5046, majf=0, minf=1 00:19:07.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:07.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.767 issued rwts: total=0,4036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.767 job5: (groupid=0, jobs=1): err= 0: pid=91421: Sat Dec 7 08:10:17 2024 00:19:07.767 write: IOPS=799, BW=200MiB/s (210MB/s)(2012MiB/10067msec); 0 zone resets 00:19:07.767 slat (usec): min=19, max=13436, avg=1237.82, stdev=2105.44 00:19:07.767 clat (msec): min=16, max=144, avg=78.81, stdev= 9.18 
00:19:07.767 lat (msec): min=17, max=144, avg=80.05, stdev= 9.12 00:19:07.767 clat percentiles (msec): 00:19:07.767 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 73], 20.00th=[ 74], 00:19:07.767 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 79], 00:19:07.767 | 70.00th=[ 80], 80.00th=[ 80], 90.00th=[ 81], 95.00th=[ 106], 00:19:07.767 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 140], 00:19:07.767 | 99.99th=[ 144] 00:19:07.767 bw ( KiB/s): min=139543, max=214016, per=11.98%, avg=204340.85, stdev=17659.31, samples=20 00:19:07.767 iops : min= 545, max= 836, avg=798.15, stdev=69.06, samples=20 00:19:07.767 lat (msec) : 20=0.05%, 50=0.25%, 100=94.32%, 250=5.38% 00:19:07.767 cpu : usr=1.51%, sys=2.03%, ctx=8022, majf=0, minf=1 00:19:07.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:07.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.767 issued rwts: total=0,8046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.767 job6: (groupid=0, jobs=1): err= 0: pid=91422: Sat Dec 7 08:10:17 2024 00:19:07.767 write: IOPS=607, BW=152MiB/s (159MB/s)(1536MiB/10110msec); 0 zone resets 00:19:07.767 slat (usec): min=20, max=54670, avg=1597.44, stdev=2938.97 00:19:07.767 clat (msec): min=3, max=216, avg=103.67, stdev=24.77 00:19:07.767 lat (msec): min=3, max=216, avg=105.27, stdev=25.00 00:19:07.767 clat percentiles (msec): 00:19:07.767 | 1.00th=[ 42], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 80], 00:19:07.767 | 30.00th=[ 103], 40.00th=[ 107], 50.00th=[ 110], 60.00th=[ 112], 00:19:07.767 | 70.00th=[ 113], 80.00th=[ 113], 90.00th=[ 114], 95.00th=[ 117], 00:19:07.767 | 99.00th=[ 201], 99.50th=[ 205], 99.90th=[ 209], 99.95th=[ 211], 00:19:07.767 | 99.99th=[ 218] 00:19:07.767 bw ( KiB/s): min=89421, max=210944, per=9.13%, avg=155664.65, stdev=28420.02, samples=20 00:19:07.767 iops : min= 349, max= 824, avg=608.05, stdev=111.05, samples=20 00:19:07.767 lat (msec) : 4=0.05%, 50=1.48%, 100=27.12%, 250=71.35% 00:19:07.767 cpu : usr=1.10%, sys=1.56%, ctx=3542, majf=0, minf=1 00:19:07.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:07.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.767 issued rwts: total=0,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.767 job7: (groupid=0, jobs=1): err= 0: pid=91423: Sat Dec 7 08:10:17 2024 00:19:07.767 write: IOPS=404, BW=101MiB/s (106MB/s)(1025MiB/10129msec); 0 zone resets 00:19:07.767 slat (usec): min=19, max=32229, avg=2432.05, stdev=4225.93 00:19:07.767 clat (msec): min=23, max=280, avg=155.56, stdev=20.53 00:19:07.767 lat (msec): min=23, max=280, avg=157.99, stdev=20.41 00:19:07.767 clat percentiles (msec): 00:19:07.767 | 1.00th=[ 87], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 146], 00:19:07.767 | 30.00th=[ 150], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 150], 00:19:07.767 | 70.00th=[ 153], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 192], 00:19:07.767 | 99.00th=[ 199], 99.50th=[ 232], 99.90th=[ 271], 99.95th=[ 271], 00:19:07.767 | 99.99th=[ 279] 00:19:07.767 bw ( KiB/s): min=86016, max=110592, per=6.06%, avg=103364.00, stdev=8827.92, samples=20 00:19:07.767 iops : min= 336, max= 432, avg=403.75, stdev=34.51, samples=20 
00:19:07.767 lat (msec) : 50=0.59%, 100=0.59%, 250=98.51%, 500=0.32% 00:19:07.767 cpu : usr=0.89%, sys=1.32%, ctx=4136, majf=0, minf=1 00:19:07.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:07.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.767 issued rwts: total=0,4101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.767 job8: (groupid=0, jobs=1): err= 0: pid=91424: Sat Dec 7 08:10:17 2024 00:19:07.767 write: IOPS=683, BW=171MiB/s (179MB/s)(1726MiB/10107msec); 0 zone resets 00:19:07.767 slat (usec): min=19, max=12002, avg=1443.70, stdev=2579.91 00:19:07.767 clat (msec): min=3, max=217, avg=92.21, stdev=27.26 00:19:07.767 lat (msec): min=3, max=217, avg=93.66, stdev=27.55 00:19:07.767 clat percentiles (msec): 00:19:07.767 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 74], 00:19:07.767 | 30.00th=[ 80], 40.00th=[ 101], 50.00th=[ 107], 60.00th=[ 111], 00:19:07.767 | 70.00th=[ 112], 80.00th=[ 113], 90.00th=[ 114], 95.00th=[ 115], 00:19:07.767 | 99.00th=[ 120], 99.50th=[ 155], 99.90th=[ 203], 99.95th=[ 211], 00:19:07.767 | 99.99th=[ 218] 00:19:07.767 bw ( KiB/s): min=141312, max=390144, per=10.27%, avg=175129.60, stdev=61657.10, samples=20 00:19:07.767 iops : min= 552, max= 1524, avg=684.10, stdev=240.85, samples=20 00:19:07.767 lat (msec) : 4=0.04%, 10=0.06%, 20=0.29%, 50=14.72%, 100=24.88% 00:19:07.767 lat (msec) : 250=60.01% 00:19:07.767 cpu : usr=1.33%, sys=1.76%, ctx=5586, majf=0, minf=1 00:19:07.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:07.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.767 issued rwts: total=0,6904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.767 job9: (groupid=0, jobs=1): err= 0: pid=91425: Sat Dec 7 08:10:17 2024 00:19:07.767 write: IOPS=834, BW=209MiB/s (219MB/s)(2100MiB/10068msec); 0 zone resets 00:19:07.767 slat (usec): min=18, max=6823, avg=1185.91, stdev=1994.93 00:19:07.767 clat (msec): min=7, max=143, avg=75.47, stdev= 5.31 00:19:07.767 lat (msec): min=7, max=143, avg=76.65, stdev= 5.03 00:19:07.767 clat percentiles (msec): 00:19:07.767 | 1.00th=[ 70], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 73], 00:19:07.767 | 30.00th=[ 75], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 77], 00:19:07.767 | 70.00th=[ 78], 80.00th=[ 79], 90.00th=[ 80], 95.00th=[ 80], 00:19:07.767 | 99.00th=[ 81], 99.50th=[ 90], 99.90th=[ 134], 99.95th=[ 138], 00:19:07.767 | 99.99th=[ 144] 00:19:07.767 bw ( KiB/s): min=205312, max=218112, per=12.51%, avg=213431.95, stdev=4077.31, samples=20 00:19:07.767 iops : min= 802, max= 852, avg=833.70, stdev=15.95, samples=20 00:19:07.767 lat (msec) : 10=0.05%, 20=0.10%, 50=0.33%, 100=99.08%, 250=0.44% 00:19:07.767 cpu : usr=1.58%, sys=1.82%, ctx=9432, majf=0, minf=1 00:19:07.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:07.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.767 issued rwts: total=0,8401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.767 job10: (groupid=0, jobs=1): 
err= 0: pid=91426: Sat Dec 7 08:10:17 2024 00:19:07.767 write: IOPS=797, BW=199MiB/s (209MB/s)(2009MiB/10068msec); 0 zone resets 00:19:07.767 slat (usec): min=21, max=24059, avg=1239.51, stdev=2106.93 00:19:07.767 clat (msec): min=30, max=142, avg=78.94, stdev= 8.98 00:19:07.767 lat (msec): min=30, max=143, avg=80.18, stdev= 8.90 00:19:07.767 clat percentiles (msec): 00:19:07.767 | 1.00th=[ 72], 5.00th=[ 73], 10.00th=[ 73], 20.00th=[ 74], 00:19:07.767 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 79], 00:19:07.767 | 70.00th=[ 79], 80.00th=[ 80], 90.00th=[ 81], 95.00th=[ 106], 00:19:07.767 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 138], 00:19:07.767 | 99.99th=[ 144] 00:19:07.767 bw ( KiB/s): min=131072, max=213504, per=11.96%, avg=204019.65, stdev=19146.48, samples=20 00:19:07.767 iops : min= 512, max= 834, avg=796.90, stdev=74.84, samples=20 00:19:07.767 lat (msec) : 50=0.10%, 100=94.50%, 250=5.40% 00:19:07.767 cpu : usr=1.65%, sys=2.20%, ctx=10082, majf=0, minf=1 00:19:07.767 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:07.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:07.767 issued rwts: total=0,8034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.767 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.767 00:19:07.767 Run status group 0 (all jobs): 00:19:07.767 WRITE: bw=1666MiB/s (1747MB/s), 98.7MiB/s-209MiB/s (103MB/s-219MB/s), io=16.5GiB (17.7GB), run=10067-10129msec 00:19:07.767 00:19:07.767 Disk stats (read/write): 00:19:07.767 nvme0n1: ios=49/8385, merge=0/0, ticks=27/1209906, in_queue=1209933, util=97.56% 00:19:07.767 nvme10n1: ios=45/10151, merge=0/0, ticks=35/1210795, in_queue=1210830, util=97.63% 00:19:07.767 nvme1n1: ios=0/16604, merge=0/0, ticks=0/1212819, in_queue=1212819, util=97.78% 00:19:07.767 nvme2n1: ios=0/7822, merge=0/0, ticks=0/1207261, in_queue=1207261, util=97.81% 00:19:07.767 nvme3n1: ios=0/7906, merge=0/0, ticks=0/1208607, in_queue=1208607, util=97.93% 00:19:07.767 nvme4n1: ios=0/15893, merge=0/0, ticks=0/1211567, in_queue=1211567, util=98.14% 00:19:07.767 nvme5n1: ios=0/12122, merge=0/0, ticks=0/1212398, in_queue=1212398, util=98.35% 00:19:07.767 nvme6n1: ios=0/8047, merge=0/0, ticks=0/1209690, in_queue=1209690, util=98.38% 00:19:07.767 nvme7n1: ios=0/13651, merge=0/0, ticks=0/1211897, in_queue=1211897, util=98.73% 00:19:07.767 nvme8n1: ios=0/16614, merge=0/0, ticks=0/1212691, in_queue=1212691, util=98.76% 00:19:07.767 nvme9n1: ios=0/15861, merge=0/0, ticks=0/1211437, in_queue=1211437, util=98.74% 00:19:07.767 08:10:17 -- target/multiconnection.sh@36 -- # sync 00:19:07.767 08:10:17 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:07.767 08:10:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:07.768 08:10:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:07.768 08:10:17 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:07.768 08:10:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 
00:19:07.768 08:10:17 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:07.768 08:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:17 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:07.768 08:10:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:07.768 08:10:17 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:17 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:17 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:07.768 08:10:17 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:17 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:07.768 08:10:17 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:07.768 08:10:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:17 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 
00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:07.768 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.768 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:07.768 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.768 
08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.768 08:10:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.768 08:10:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:07.768 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:07.768 08:10:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:07.768 08:10:18 -- common/autotest_common.sh@1208 -- # local i=0 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:07.768 08:10:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:07.769 08:10:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:07.769 08:10:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:07.769 08:10:18 -- common/autotest_common.sh@1220 -- # return 0 00:19:07.769 08:10:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:07.769 08:10:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.769 08:10:18 -- common/autotest_common.sh@10 -- # set +x 00:19:07.769 08:10:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.769 08:10:18 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:07.769 08:10:18 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:07.769 08:10:18 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:07.769 08:10:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.769 08:10:18 -- nvmf/common.sh@116 -- # sync 00:19:07.769 08:10:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:07.769 08:10:18 -- nvmf/common.sh@119 -- # set +e 00:19:07.769 08:10:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.769 08:10:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:07.769 rmmod nvme_tcp 00:19:07.769 rmmod nvme_fabrics 00:19:07.769 rmmod nvme_keyring 00:19:07.769 08:10:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:07.769 08:10:18 -- nvmf/common.sh@123 -- # set -e 00:19:07.769 08:10:18 -- nvmf/common.sh@124 -- # return 0 00:19:07.769 08:10:18 -- nvmf/common.sh@477 -- # '[' -n 90711 ']' 00:19:07.769 08:10:18 -- nvmf/common.sh@478 -- # killprocess 90711 00:19:07.769 08:10:18 -- common/autotest_common.sh@936 -- # '[' -z 90711 ']' 00:19:07.769 08:10:18 -- common/autotest_common.sh@940 -- # kill -0 90711 00:19:07.769 08:10:18 -- common/autotest_common.sh@941 -- # uname 00:19:07.769 08:10:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.769 08:10:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90711 00:19:07.769 killing process with pid 90711 00:19:07.769 08:10:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:07.769 08:10:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:07.769 08:10:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90711' 00:19:07.769 08:10:18 -- common/autotest_common.sh@955 -- # kill 90711 00:19:07.769 08:10:18 -- common/autotest_common.sh@960 -- # wait 90711 00:19:08.337 08:10:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:08.337 08:10:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:08.337 08:10:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:08.337 08:10:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:08.337 08:10:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:08.337 08:10:19 
-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.337 08:10:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.337 08:10:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.337 08:10:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:08.337 ************************************ 00:19:08.337 END TEST nvmf_multiconnection 00:19:08.337 ************************************ 00:19:08.337 00:19:08.337 real 0m49.634s 00:19:08.337 user 2m45.133s 00:19:08.337 sys 0m26.535s 00:19:08.337 08:10:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:08.337 08:10:19 -- common/autotest_common.sh@10 -- # set +x 00:19:08.337 08:10:19 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:08.337 08:10:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:08.337 08:10:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.337 08:10:19 -- common/autotest_common.sh@10 -- # set +x 00:19:08.337 ************************************ 00:19:08.337 START TEST nvmf_initiator_timeout 00:19:08.337 ************************************ 00:19:08.337 08:10:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:08.337 * Looking for test storage... 00:19:08.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:08.337 08:10:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:08.337 08:10:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:08.337 08:10:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:08.596 08:10:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:08.596 08:10:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:08.596 08:10:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:08.596 08:10:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:08.596 08:10:19 -- scripts/common.sh@335 -- # IFS=.-: 00:19:08.596 08:10:19 -- scripts/common.sh@335 -- # read -ra ver1 00:19:08.596 08:10:19 -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.596 08:10:19 -- scripts/common.sh@336 -- # read -ra ver2 00:19:08.596 08:10:19 -- scripts/common.sh@337 -- # local 'op=<' 00:19:08.596 08:10:19 -- scripts/common.sh@339 -- # ver1_l=2 00:19:08.596 08:10:19 -- scripts/common.sh@340 -- # ver2_l=1 00:19:08.596 08:10:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:08.596 08:10:19 -- scripts/common.sh@343 -- # case "$op" in 00:19:08.596 08:10:19 -- scripts/common.sh@344 -- # : 1 00:19:08.596 08:10:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:08.596 08:10:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.596 08:10:19 -- scripts/common.sh@364 -- # decimal 1 00:19:08.596 08:10:19 -- scripts/common.sh@352 -- # local d=1 00:19:08.596 08:10:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.596 08:10:19 -- scripts/common.sh@354 -- # echo 1 00:19:08.596 08:10:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:08.596 08:10:19 -- scripts/common.sh@365 -- # decimal 2 00:19:08.596 08:10:19 -- scripts/common.sh@352 -- # local d=2 00:19:08.596 08:10:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.596 08:10:19 -- scripts/common.sh@354 -- # echo 2 00:19:08.596 08:10:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:08.596 08:10:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:08.596 08:10:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:08.596 08:10:19 -- scripts/common.sh@367 -- # return 0 00:19:08.596 08:10:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.596 08:10:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:08.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.596 --rc genhtml_branch_coverage=1 00:19:08.596 --rc genhtml_function_coverage=1 00:19:08.596 --rc genhtml_legend=1 00:19:08.596 --rc geninfo_all_blocks=1 00:19:08.596 --rc geninfo_unexecuted_blocks=1 00:19:08.596 00:19:08.596 ' 00:19:08.596 08:10:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:08.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.596 --rc genhtml_branch_coverage=1 00:19:08.596 --rc genhtml_function_coverage=1 00:19:08.596 --rc genhtml_legend=1 00:19:08.596 --rc geninfo_all_blocks=1 00:19:08.596 --rc geninfo_unexecuted_blocks=1 00:19:08.596 00:19:08.596 ' 00:19:08.596 08:10:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:08.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.596 --rc genhtml_branch_coverage=1 00:19:08.596 --rc genhtml_function_coverage=1 00:19:08.596 --rc genhtml_legend=1 00:19:08.596 --rc geninfo_all_blocks=1 00:19:08.597 --rc geninfo_unexecuted_blocks=1 00:19:08.597 00:19:08.597 ' 00:19:08.597 08:10:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:08.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.597 --rc genhtml_branch_coverage=1 00:19:08.597 --rc genhtml_function_coverage=1 00:19:08.597 --rc genhtml_legend=1 00:19:08.597 --rc geninfo_all_blocks=1 00:19:08.597 --rc geninfo_unexecuted_blocks=1 00:19:08.597 00:19:08.597 ' 00:19:08.597 08:10:19 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.597 08:10:19 -- nvmf/common.sh@7 -- # uname -s 00:19:08.597 08:10:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.597 08:10:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.597 08:10:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.597 08:10:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.597 08:10:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.597 08:10:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.597 08:10:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.597 08:10:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.597 08:10:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.597 08:10:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.597 08:10:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
00:19:08.597 08:10:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:19:08.597 08:10:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.597 08:10:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.597 08:10:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.597 08:10:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.597 08:10:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.597 08:10:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.597 08:10:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.597 08:10:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.597 08:10:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.597 08:10:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.597 08:10:19 -- paths/export.sh@5 -- # export PATH 00:19:08.597 08:10:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.597 08:10:19 -- nvmf/common.sh@46 -- # : 0 00:19:08.597 08:10:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:08.597 08:10:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:08.597 08:10:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:08.597 08:10:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.597 08:10:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.597 08:10:19 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:08.597 08:10:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:08.597 08:10:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:08.597 08:10:19 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:08.597 08:10:19 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:08.597 08:10:19 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:08.597 08:10:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:08.597 08:10:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.597 08:10:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:08.597 08:10:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:08.597 08:10:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:08.597 08:10:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.597 08:10:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.597 08:10:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.597 08:10:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:08.597 08:10:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:08.597 08:10:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:08.597 08:10:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:08.597 08:10:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:08.597 08:10:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:08.597 08:10:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.597 08:10:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.597 08:10:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:08.597 08:10:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:08.597 08:10:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:08.597 08:10:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:08.597 08:10:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:08.597 08:10:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.597 08:10:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:08.597 08:10:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:08.597 08:10:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:08.597 08:10:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:08.597 08:10:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:08.597 08:10:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:08.597 Cannot find device "nvmf_tgt_br" 00:19:08.597 08:10:19 -- nvmf/common.sh@154 -- # true 00:19:08.597 08:10:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.597 Cannot find device "nvmf_tgt_br2" 00:19:08.597 08:10:19 -- nvmf/common.sh@155 -- # true 00:19:08.597 08:10:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:08.597 08:10:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:08.597 Cannot find device "nvmf_tgt_br" 00:19:08.597 08:10:19 -- nvmf/common.sh@157 -- # true 00:19:08.597 08:10:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:08.597 Cannot find device "nvmf_tgt_br2" 00:19:08.597 08:10:19 -- nvmf/common.sh@158 -- # true 00:19:08.597 08:10:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:08.597 08:10:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:08.597 08:10:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:08.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.597 08:10:19 -- nvmf/common.sh@161 -- # true 00:19:08.597 08:10:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.597 08:10:19 -- nvmf/common.sh@162 -- # true 00:19:08.597 08:10:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:08.597 08:10:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:08.597 08:10:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:08.597 08:10:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:08.597 08:10:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:08.597 08:10:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:08.597 08:10:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:08.597 08:10:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:08.597 08:10:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:08.597 08:10:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:08.597 08:10:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:08.857 08:10:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:08.857 08:10:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:08.857 08:10:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:08.857 08:10:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:08.857 08:10:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:08.857 08:10:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:08.857 08:10:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:08.857 08:10:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:08.857 08:10:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:08.857 08:10:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:08.857 08:10:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:08.857 08:10:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:08.857 08:10:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:08.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:08.857 00:19:08.857 --- 10.0.0.2 ping statistics --- 00:19:08.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.857 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:08.857 08:10:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:08.857 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:08.857 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:08.857 00:19:08.857 --- 10.0.0.3 ping statistics --- 00:19:08.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.857 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:08.857 08:10:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:08.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:08.857 00:19:08.857 --- 10.0.0.1 ping statistics --- 00:19:08.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.857 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:08.857 08:10:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.857 08:10:19 -- nvmf/common.sh@421 -- # return 0 00:19:08.857 08:10:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:08.857 08:10:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.857 08:10:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:08.857 08:10:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:08.857 08:10:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.857 08:10:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:08.857 08:10:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:08.857 08:10:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:08.857 08:10:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:08.857 08:10:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:08.857 08:10:19 -- common/autotest_common.sh@10 -- # set +x 00:19:08.857 08:10:19 -- nvmf/common.sh@469 -- # nvmfpid=91792 00:19:08.857 08:10:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:08.857 08:10:19 -- nvmf/common.sh@470 -- # waitforlisten 91792 00:19:08.857 08:10:19 -- common/autotest_common.sh@829 -- # '[' -z 91792 ']' 00:19:08.857 08:10:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.857 08:10:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.857 08:10:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.857 08:10:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.857 08:10:19 -- common/autotest_common.sh@10 -- # set +x 00:19:08.857 [2024-12-07 08:10:20.049251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:08.857 [2024-12-07 08:10:20.049341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.117 [2024-12-07 08:10:20.185500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.117 [2024-12-07 08:10:20.261075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:09.117 [2024-12-07 08:10:20.261224] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.117 [2024-12-07 08:10:20.261254] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.117 [2024-12-07 08:10:20.261262] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:09.117 [2024-12-07 08:10:20.261356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.117 [2024-12-07 08:10:20.261467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.117 [2024-12-07 08:10:20.262135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.117 [2024-12-07 08:10:20.262169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.056 08:10:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.056 08:10:21 -- common/autotest_common.sh@862 -- # return 0 00:19:10.056 08:10:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:10.056 08:10:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:10.056 08:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 08:10:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:10.056 08:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.056 08:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 Malloc0 00:19:10.056 08:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:10.056 08:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.056 08:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 Delay0 00:19:10.056 08:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:10.056 08:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.056 08:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 [2024-12-07 08:10:21.167530] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.056 08:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:10.056 08:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.056 08:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 08:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:10.056 08:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.056 08:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 08:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.056 08:10:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.056 08:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:10.056 [2024-12-07 08:10:21.195715] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.056 08:10:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.056 08:10:21 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:10.315 08:10:21 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:10.315 08:10:21 -- common/autotest_common.sh@1187 -- # local i=0 00:19:10.315 08:10:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:10.315 08:10:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:10.315 08:10:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:12.217 08:10:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:12.217 08:10:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:12.217 08:10:23 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:12.217 08:10:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:12.217 08:10:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:12.217 08:10:23 -- common/autotest_common.sh@1197 -- # return 0 00:19:12.217 08:10:23 -- target/initiator_timeout.sh@35 -- # fio_pid=91874 00:19:12.217 08:10:23 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:12.217 08:10:23 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:12.217 [global] 00:19:12.217 thread=1 00:19:12.217 invalidate=1 00:19:12.217 rw=write 00:19:12.217 time_based=1 00:19:12.217 runtime=60 00:19:12.217 ioengine=libaio 00:19:12.217 direct=1 00:19:12.217 bs=4096 00:19:12.217 iodepth=1 00:19:12.217 norandommap=0 00:19:12.217 numjobs=1 00:19:12.217 00:19:12.217 verify_dump=1 00:19:12.217 verify_backlog=512 00:19:12.217 verify_state_save=0 00:19:12.217 do_verify=1 00:19:12.217 verify=crc32c-intel 00:19:12.217 [job0] 00:19:12.217 filename=/dev/nvme0n1 00:19:12.217 Could not set queue depth (nvme0n1) 00:19:12.476 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:12.476 fio-3.35 00:19:12.476 Starting 1 thread 00:19:15.765 08:10:26 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:15.765 08:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.765 08:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:15.765 true 00:19:15.765 08:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.765 08:10:26 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:15.765 08:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.765 08:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:15.765 true 00:19:15.765 08:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.765 08:10:26 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:15.765 08:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.765 08:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:15.765 true 00:19:15.765 08:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.765 08:10:26 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:15.765 08:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.765 08:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:15.765 true 00:19:15.765 08:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.765 08:10:26 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:18.298 08:10:29 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:18.298 08:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.298 08:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:18.298 true 00:19:18.298 08:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.298 08:10:29 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:18.298 08:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.298 08:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:18.298 true 00:19:18.298 08:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.298 08:10:29 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:18.298 08:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.298 08:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:18.298 true 00:19:18.298 08:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.298 08:10:29 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:18.298 08:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.298 08:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:18.298 true 00:19:18.298 08:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.298 08:10:29 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:18.298 08:10:29 -- target/initiator_timeout.sh@54 -- # wait 91874 00:20:14.563 00:20:14.563 job0: (groupid=0, jobs=1): err= 0: pid=91901: Sat Dec 7 08:11:23 2024 00:20:14.563 read: IOPS=923, BW=3694KiB/s (3783kB/s)(216MiB/60000msec) 00:20:14.563 slat (usec): min=10, max=13374, avg=13.10, stdev=65.77 00:20:14.563 clat (usec): min=151, max=40498k, avg=906.88, stdev=172037.32 00:20:14.563 lat (usec): min=163, max=40498k, avg=919.98, stdev=172037.33 00:20:14.563 clat percentiles (usec): 00:20:14.563 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 165], 00:20:14.563 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:20:14.563 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 204], 00:20:14.563 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 251], 99.95th=[ 289], 00:20:14.563 | 99.99th=[ 594] 00:20:14.563 write: IOPS=930, BW=3721KiB/s (3810kB/s)(218MiB/60000msec); 0 zone resets 00:20:14.563 slat (usec): min=16, max=582, avg=18.95, stdev= 5.81 00:20:14.563 clat (usec): min=118, max=2217, avg=139.87, stdev=15.99 00:20:14.563 lat (usec): min=135, max=2240, avg=158.82, stdev=17.43 00:20:14.563 clat percentiles (usec): 00:20:14.563 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 131], 00:20:14.563 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:20:14.563 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 165], 00:20:14.563 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 223], 99.95th=[ 265], 00:20:14.563 | 99.99th=[ 515] 00:20:14.563 bw ( KiB/s): min= 4256, max=12288, per=100.00%, avg=11168.82, stdev=1639.06, samples=39 00:20:14.563 iops : min= 1064, max= 3072, avg=2792.21, stdev=409.77, samples=39 00:20:14.563 lat (usec) : 250=99.92%, 500=0.07%, 750=0.01% 00:20:14.563 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:14.563 cpu : usr=0.51%, sys=2.22%, ctx=111272, majf=0, minf=5 00:20:14.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:14.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:14.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.563 issued rwts: total=55415,55808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:14.563 00:20:14.563 Run status group 0 (all jobs): 00:20:14.563 READ: bw=3694KiB/s (3783kB/s), 3694KiB/s-3694KiB/s (3783kB/s-3783kB/s), io=216MiB (227MB), run=60000-60000msec 00:20:14.563 WRITE: bw=3721KiB/s (3810kB/s), 3721KiB/s-3721KiB/s (3810kB/s-3810kB/s), io=218MiB (229MB), run=60000-60000msec 00:20:14.563 00:20:14.563 Disk stats (read/write): 00:20:14.563 nvme0n1: ios=55603/55326, merge=0/0, ticks=10132/8381, in_queue=18513, util=99.74% 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:14.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:14.563 08:11:23 -- common/autotest_common.sh@1208 -- # local i=0 00:20:14.563 08:11:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:14.563 08:11:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:14.563 08:11:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:14.563 08:11:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:14.563 08:11:23 -- common/autotest_common.sh@1220 -- # return 0 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:14.563 nvmf hotplug test: fio successful as expected 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.563 08:11:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.563 08:11:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.563 08:11:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:14.563 08:11:23 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:14.563 08:11:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:14.563 08:11:23 -- nvmf/common.sh@116 -- # sync 00:20:14.563 08:11:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:14.563 08:11:23 -- nvmf/common.sh@119 -- # set +e 00:20:14.563 08:11:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:14.563 08:11:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:14.563 rmmod nvme_tcp 00:20:14.563 rmmod nvme_fabrics 00:20:14.563 rmmod nvme_keyring 00:20:14.563 08:11:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:14.563 08:11:23 -- nvmf/common.sh@123 -- # set -e 00:20:14.563 08:11:23 -- nvmf/common.sh@124 -- # return 0 00:20:14.563 08:11:23 -- nvmf/common.sh@477 -- # '[' -n 91792 ']' 00:20:14.563 08:11:23 -- nvmf/common.sh@478 -- # killprocess 91792 00:20:14.563 08:11:23 -- common/autotest_common.sh@936 -- # '[' -z 91792 ']' 00:20:14.563 08:11:23 -- common/autotest_common.sh@940 -- # kill -0 91792 00:20:14.563 08:11:23 -- common/autotest_common.sh@941 -- # uname 00:20:14.563 08:11:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.563 08:11:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91792 00:20:14.563 08:11:23 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:14.563 08:11:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:14.563 killing process with pid 91792 00:20:14.563 08:11:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91792' 00:20:14.563 08:11:23 -- common/autotest_common.sh@955 -- # kill 91792 00:20:14.563 08:11:23 -- common/autotest_common.sh@960 -- # wait 91792 00:20:14.563 08:11:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:14.563 08:11:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:14.563 08:11:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:14.564 08:11:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:14.564 08:11:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:14.564 08:11:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.564 08:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.564 08:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.564 08:11:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:14.564 00:20:14.564 real 1m4.668s 00:20:14.564 user 4m5.198s 00:20:14.564 sys 0m10.031s 00:20:14.564 08:11:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:14.564 08:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 ************************************ 00:20:14.564 END TEST nvmf_initiator_timeout 00:20:14.564 ************************************ 00:20:14.564 08:11:24 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:14.564 08:11:24 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:14.564 08:11:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.564 08:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 08:11:24 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:14.564 08:11:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.564 08:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 08:11:24 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:14.564 08:11:24 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:14.564 08:11:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:14.564 08:11:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:14.564 08:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 ************************************ 00:20:14.564 START TEST nvmf_multicontroller 00:20:14.564 ************************************ 00:20:14.564 08:11:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:14.564 * Looking for test storage... 
00:20:14.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:14.564 08:11:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:14.564 08:11:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:14.564 08:11:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:14.564 08:11:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:14.564 08:11:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:14.564 08:11:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:14.564 08:11:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:14.564 08:11:24 -- scripts/common.sh@335 -- # IFS=.-: 00:20:14.564 08:11:24 -- scripts/common.sh@335 -- # read -ra ver1 00:20:14.564 08:11:24 -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.564 08:11:24 -- scripts/common.sh@336 -- # read -ra ver2 00:20:14.564 08:11:24 -- scripts/common.sh@337 -- # local 'op=<' 00:20:14.564 08:11:24 -- scripts/common.sh@339 -- # ver1_l=2 00:20:14.564 08:11:24 -- scripts/common.sh@340 -- # ver2_l=1 00:20:14.564 08:11:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:14.564 08:11:24 -- scripts/common.sh@343 -- # case "$op" in 00:20:14.564 08:11:24 -- scripts/common.sh@344 -- # : 1 00:20:14.564 08:11:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:14.564 08:11:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:14.564 08:11:24 -- scripts/common.sh@364 -- # decimal 1 00:20:14.564 08:11:24 -- scripts/common.sh@352 -- # local d=1 00:20:14.564 08:11:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.564 08:11:24 -- scripts/common.sh@354 -- # echo 1 00:20:14.564 08:11:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:14.564 08:11:24 -- scripts/common.sh@365 -- # decimal 2 00:20:14.564 08:11:24 -- scripts/common.sh@352 -- # local d=2 00:20:14.564 08:11:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.564 08:11:24 -- scripts/common.sh@354 -- # echo 2 00:20:14.564 08:11:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:14.564 08:11:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:14.564 08:11:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:14.564 08:11:24 -- scripts/common.sh@367 -- # return 0 00:20:14.564 08:11:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.564 08:11:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:14.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.564 --rc genhtml_branch_coverage=1 00:20:14.564 --rc genhtml_function_coverage=1 00:20:14.564 --rc genhtml_legend=1 00:20:14.564 --rc geninfo_all_blocks=1 00:20:14.564 --rc geninfo_unexecuted_blocks=1 00:20:14.564 00:20:14.564 ' 00:20:14.564 08:11:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:14.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.564 --rc genhtml_branch_coverage=1 00:20:14.564 --rc genhtml_function_coverage=1 00:20:14.564 --rc genhtml_legend=1 00:20:14.564 --rc geninfo_all_blocks=1 00:20:14.564 --rc geninfo_unexecuted_blocks=1 00:20:14.564 00:20:14.564 ' 00:20:14.564 08:11:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:14.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.564 --rc genhtml_branch_coverage=1 00:20:14.564 --rc genhtml_function_coverage=1 00:20:14.564 --rc genhtml_legend=1 00:20:14.564 --rc geninfo_all_blocks=1 00:20:14.564 --rc geninfo_unexecuted_blocks=1 00:20:14.564 00:20:14.564 ' 00:20:14.564 
08:11:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:14.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.564 --rc genhtml_branch_coverage=1 00:20:14.564 --rc genhtml_function_coverage=1 00:20:14.564 --rc genhtml_legend=1 00:20:14.564 --rc geninfo_all_blocks=1 00:20:14.564 --rc geninfo_unexecuted_blocks=1 00:20:14.564 00:20:14.564 ' 00:20:14.564 08:11:24 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:14.564 08:11:24 -- nvmf/common.sh@7 -- # uname -s 00:20:14.564 08:11:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.564 08:11:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.564 08:11:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.564 08:11:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.564 08:11:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.564 08:11:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.564 08:11:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.564 08:11:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.564 08:11:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.564 08:11:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.564 08:11:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:14.564 08:11:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:14.564 08:11:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.564 08:11:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.564 08:11:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:14.564 08:11:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:14.564 08:11:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.564 08:11:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.564 08:11:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.564 08:11:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.564 08:11:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.564 08:11:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.564 08:11:24 -- paths/export.sh@5 -- # export PATH 00:20:14.564 08:11:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.564 08:11:24 -- nvmf/common.sh@46 -- # : 0 00:20:14.564 08:11:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:14.564 08:11:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:14.564 08:11:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:14.564 08:11:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.564 08:11:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.564 08:11:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:14.564 08:11:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:14.564 08:11:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:14.564 08:11:24 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:14.564 08:11:24 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:14.564 08:11:24 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:14.564 08:11:24 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:14.564 08:11:24 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:14.564 08:11:24 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:14.564 08:11:24 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:14.564 08:11:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:14.564 08:11:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.564 08:11:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:14.564 08:11:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:14.564 08:11:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:14.564 08:11:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.565 08:11:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.565 08:11:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.565 08:11:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:14.565 08:11:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:14.565 08:11:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:14.565 08:11:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:14.565 08:11:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:14.565 08:11:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:14.565 08:11:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.565 08:11:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
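
The lt/cmp_versions trace at the top of this excerpt is how common.sh decides which lcov options to export: version strings are split on '.', '-' and ':' and compared field by field. A standalone approximation of that helper (numeric fields only; this is a sketch of the logic, not the exact library code):

--- illustrative sketch (not part of the captured output) ---
#!/usr/bin/env bash
# Approximation of the cmp_versions()/lt() helpers traced above.
# Splits on '.', '-', ':' and compares numeric fields; missing fields count as 0.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op="$2" v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' ]]; return; fi
        if (( a < b )); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }

# lcov 1.15 is older than 2, so the branch/function coverage flags get enabled.
lt 1.15 2 && echo "enable --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
--- end sketch ---
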
00:20:14.565 08:11:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:14.565 08:11:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:14.565 08:11:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:14.565 08:11:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:14.565 08:11:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:14.565 08:11:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.565 08:11:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:14.565 08:11:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:14.565 08:11:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:14.565 08:11:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:14.565 08:11:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:14.565 08:11:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:14.565 Cannot find device "nvmf_tgt_br" 00:20:14.565 08:11:24 -- nvmf/common.sh@154 -- # true 00:20:14.565 08:11:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:14.565 Cannot find device "nvmf_tgt_br2" 00:20:14.565 08:11:24 -- nvmf/common.sh@155 -- # true 00:20:14.565 08:11:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:14.565 08:11:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:14.565 Cannot find device "nvmf_tgt_br" 00:20:14.565 08:11:24 -- nvmf/common.sh@157 -- # true 00:20:14.565 08:11:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:14.565 Cannot find device "nvmf_tgt_br2" 00:20:14.565 08:11:24 -- nvmf/common.sh@158 -- # true 00:20:14.565 08:11:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:14.565 08:11:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:14.565 08:11:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:14.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:14.565 08:11:24 -- nvmf/common.sh@161 -- # true 00:20:14.565 08:11:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:14.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:14.565 08:11:24 -- nvmf/common.sh@162 -- # true 00:20:14.565 08:11:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:14.565 08:11:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:14.565 08:11:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:14.565 08:11:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:14.565 08:11:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:14.565 08:11:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:14.565 08:11:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:14.565 08:11:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:14.565 08:11:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:14.565 08:11:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:14.565 08:11:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:14.565 08:11:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
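
The nvmf_veth_init steps above, together with the bridge setup and ping checks that follow, build the standard virt topology for these tests: three veth pairs whose target-side ends live in nvmf_tgt_ns_spdk and whose root-namespace ends are enslaved to one bridge. A condensed sketch of the same wiring (same names and addresses as the trace; needs root and assumes none of these links or namespaces exist yet):

--- illustrative sketch (not part of the captured output) ---
#!/usr/bin/env bash
# Condensed version of the nvmf_veth_init wiring traced above (requires root).
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# One veth pair for the initiator, two for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends live inside the namespace.
ip link set nvmf_tgt_if netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listen addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the root-namespace ends together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
--- end sketch ---
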
00:20:14.565 08:11:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:14.565 08:11:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:14.565 08:11:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:14.565 08:11:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:14.565 08:11:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:14.565 08:11:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:14.565 08:11:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:14.565 08:11:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:14.565 08:11:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:14.565 08:11:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:14.565 08:11:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:14.565 08:11:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:14.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:14.565 00:20:14.565 --- 10.0.0.2 ping statistics --- 00:20:14.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.565 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:14.565 08:11:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:14.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:14.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:14.565 00:20:14.565 --- 10.0.0.3 ping statistics --- 00:20:14.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.565 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:14.565 08:11:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:14.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:14.565 00:20:14.565 --- 10.0.0.1 ping statistics --- 00:20:14.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.565 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:14.565 08:11:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.565 08:11:24 -- nvmf/common.sh@421 -- # return 0 00:20:14.565 08:11:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:14.565 08:11:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.565 08:11:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:14.565 08:11:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:14.565 08:11:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.565 08:11:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:14.565 08:11:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:14.565 08:11:24 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:14.565 08:11:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:14.565 08:11:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.565 08:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.565 08:11:24 -- nvmf/common.sh@469 -- # nvmfpid=92751 00:20:14.565 08:11:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:14.565 08:11:24 -- nvmf/common.sh@470 -- # waitforlisten 92751 00:20:14.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.565 08:11:24 -- common/autotest_common.sh@829 -- # '[' -z 92751 ']' 00:20:14.565 08:11:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.565 08:11:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.565 08:11:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.565 08:11:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.565 08:11:24 -- common/autotest_common.sh@10 -- # set +x 00:20:14.565 [2024-12-07 08:11:24.784722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:14.565 [2024-12-07 08:11:24.784819] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.565 [2024-12-07 08:11:24.926706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.565 [2024-12-07 08:11:24.997977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:14.565 [2024-12-07 08:11:24.998125] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.565 [2024-12-07 08:11:24.998137] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.565 [2024-12-07 08:11:24.998145] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
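
nvmfappstart above launches nvmf_tgt inside the namespace and records its pid (92751) so the EXIT trap can tear it down; waitforlisten then blocks until the RPC socket answers. A sketch of that launch-and-wait pattern, where the polling loop is an illustrative stand-in for the real waitforlisten helper:

--- illustrative sketch (not part of the captured output) ---
#!/usr/bin/env bash
# Launch nvmf_tgt inside the test namespace and wait for its RPC socket,
# mirroring the nvmfappstart/waitforlisten sequence in the trace (run as root).
NS=nvmf_tgt_ns_spdk
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
trap 'kill "$nvmfpid"' SIGINT SIGTERM EXIT

# Stand-in for waitforlisten: poll until the UNIX-domain RPC socket answers.
for (( i = 0; i < 100; i++ )); do
    if "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.5
done
--- end sketch ---
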
00:20:14.565 [2024-12-07 08:11:24.998324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.565 [2024-12-07 08:11:24.998870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.565 [2024-12-07 08:11:24.998881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.565 08:11:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.565 08:11:25 -- common/autotest_common.sh@862 -- # return 0 00:20:14.565 08:11:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.565 08:11:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.565 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.565 08:11:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.565 08:11:25 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.565 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.565 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.565 [2024-12-07 08:11:25.824161] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.565 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.565 08:11:25 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.565 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.565 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 Malloc0 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 [2024-12-07 08:11:25.896660] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 [2024-12-07 08:11:25.904590] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 Malloc1 00:20:14.824 08:11:25 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.824 08:11:25 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:14.824 08:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.824 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:14.825 08:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.825 08:11:25 -- host/multicontroller.sh@44 -- # bdevperf_pid=92803 00:20:14.825 08:11:25 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:14.825 08:11:25 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.825 08:11:25 -- host/multicontroller.sh@47 -- # waitforlisten 92803 /var/tmp/bdevperf.sock 00:20:14.825 08:11:25 -- common/autotest_common.sh@829 -- # '[' -z 92803 ']' 00:20:14.825 08:11:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.825 08:11:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.825 08:11:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
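
The rpc_cmd calls above give both subsystems a 64 MiB malloc-backed namespace and the same two TCP listeners, then start bdevperf in -z mode so it can be driven over /var/tmp/bdevperf.sock. rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent direct invocations look like this (sketch):

--- illustrative sketch (not part of the captured output) ---
#!/usr/bin/env bash
# Target-side provisioning equivalent to the rpc_cmd calls traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

# The initiator side is exercised through bdevperf, started separately as:
#   bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
--- end sketch ---
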
00:20:14.825 08:11:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.825 08:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:15.758 08:11:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.758 08:11:26 -- common/autotest_common.sh@862 -- # return 0 00:20:15.758 08:11:26 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:15.758 08:11:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.758 08:11:26 -- common/autotest_common.sh@10 -- # set +x 00:20:16.016 NVMe0n1 00:20:16.016 08:11:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.016 08:11:27 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:16.016 08:11:27 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:16.016 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.016 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.016 08:11:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.016 1 00:20:16.016 08:11:27 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:16.016 08:11:27 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.016 08:11:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:16.016 08:11:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.016 08:11:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.016 08:11:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.016 08:11:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.016 08:11:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:16.016 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.016 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.016 2024/12/07 08:11:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:16.016 request: 00:20:16.016 { 00:20:16.016 "method": "bdev_nvme_attach_controller", 00:20:16.016 "params": { 00:20:16.016 "name": "NVMe0", 00:20:16.016 "trtype": "tcp", 00:20:16.016 "traddr": "10.0.0.2", 00:20:16.016 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:16.017 "hostaddr": "10.0.0.2", 00:20:16.017 "hostsvcid": "60000", 00:20:16.017 "adrfam": "ipv4", 00:20:16.017 "trsvcid": "4420", 00:20:16.017 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:16.017 } 00:20:16.017 } 00:20:16.017 Got JSON-RPC error response 00:20:16.017 GoRPCClient: error on JSON-RPC call 00:20:16.017 08:11:27 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.017 08:11:27 -- 
common/autotest_common.sh@653 -- # es=1 00:20:16.017 08:11:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.017 08:11:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.017 08:11:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.017 08:11:27 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:16.017 08:11:27 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.017 08:11:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:16.017 08:11:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.017 08:11:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:16.017 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.017 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.017 2024/12/07 08:11:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:16.017 request: 00:20:16.017 { 00:20:16.017 "method": "bdev_nvme_attach_controller", 00:20:16.017 "params": { 00:20:16.017 "name": "NVMe0", 00:20:16.017 "trtype": "tcp", 00:20:16.017 "traddr": "10.0.0.2", 00:20:16.017 "hostaddr": "10.0.0.2", 00:20:16.017 "hostsvcid": "60000", 00:20:16.017 "adrfam": "ipv4", 00:20:16.017 "trsvcid": "4420", 00:20:16.017 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:16.017 } 00:20:16.017 } 00:20:16.017 Got JSON-RPC error response 00:20:16.017 GoRPCClient: error on JSON-RPC call 00:20:16.017 08:11:27 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.017 08:11:27 -- common/autotest_common.sh@653 -- # es=1 00:20:16.017 08:11:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.017 08:11:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.017 08:11:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.017 08:11:27 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:16.017 08:11:27 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.017 08:11:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:16.017 08:11:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.017 08:11:27 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.017 08:11:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:16.017 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.017 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.017 2024/12/07 08:11:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:16.017 request: 00:20:16.017 { 00:20:16.017 "method": "bdev_nvme_attach_controller", 00:20:16.017 "params": { 00:20:16.017 "name": "NVMe0", 00:20:16.017 "trtype": "tcp", 00:20:16.017 "traddr": "10.0.0.2", 00:20:16.017 "hostaddr": "10.0.0.2", 00:20:16.017 "hostsvcid": "60000", 00:20:16.017 "adrfam": "ipv4", 00:20:16.017 "trsvcid": "4420", 00:20:16.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.017 "multipath": "disable" 00:20:16.017 } 00:20:16.017 } 00:20:16.017 Got JSON-RPC error response 00:20:16.017 GoRPCClient: error on JSON-RPC call 00:20:16.017 08:11:27 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.017 08:11:27 -- common/autotest_common.sh@653 -- # es=1 00:20:16.017 08:11:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.017 08:11:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.017 08:11:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.017 08:11:27 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:16.017 08:11:27 -- common/autotest_common.sh@650 -- # local es=0 00:20:16.017 08:11:27 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:16.017 08:11:27 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:16.017 08:11:27 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.017 08:11:27 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:16.017 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.017 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.017 2024/12/07 08:11:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:16.017 request: 00:20:16.017 { 00:20:16.017 "method": "bdev_nvme_attach_controller", 00:20:16.017 "params": { 00:20:16.017 "name": "NVMe0", 
00:20:16.017 "trtype": "tcp", 00:20:16.017 "traddr": "10.0.0.2", 00:20:16.017 "hostaddr": "10.0.0.2", 00:20:16.017 "hostsvcid": "60000", 00:20:16.017 "adrfam": "ipv4", 00:20:16.017 "trsvcid": "4420", 00:20:16.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.017 "multipath": "failover" 00:20:16.017 } 00:20:16.017 } 00:20:16.017 Got JSON-RPC error response 00:20:16.017 GoRPCClient: error on JSON-RPC call 00:20:16.017 08:11:27 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:16.017 08:11:27 -- common/autotest_common.sh@653 -- # es=1 00:20:16.017 08:11:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.017 08:11:27 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.017 08:11:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.018 08:11:27 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:16.018 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.018 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.018 00:20:16.018 08:11:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.018 08:11:27 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:16.018 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.018 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.018 08:11:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.018 08:11:27 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:16.018 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.018 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.276 00:20:16.276 08:11:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.276 08:11:27 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:16.276 08:11:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.276 08:11:27 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:16.276 08:11:27 -- common/autotest_common.sh@10 -- # set +x 00:20:16.276 08:11:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.276 08:11:27 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:16.276 08:11:27 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:17.208 0 00:20:17.466 08:11:28 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:17.466 08:11:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.466 08:11:28 -- common/autotest_common.sh@10 -- # set +x 00:20:17.466 08:11:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.466 08:11:28 -- host/multicontroller.sh@100 -- # killprocess 92803 00:20:17.466 08:11:28 -- common/autotest_common.sh@936 -- # '[' -z 92803 ']' 00:20:17.466 08:11:28 -- common/autotest_common.sh@940 -- # kill -0 92803 00:20:17.466 08:11:28 -- common/autotest_common.sh@941 -- # uname 00:20:17.466 08:11:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.466 08:11:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92803 00:20:17.466 killing process with pid 92803 00:20:17.466 
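
The four rejected bdev_nvme_attach_controller calls above are the point of the multicontroller test: once NVMe0 exists, reusing the name is only accepted when it adds a genuinely new path to the same subsystem (the later 4421 attach), and -x disable / -x failover do not change that for an identical path. The happy-path sequence against the bdevperf socket, with the same parameters as the trace (negative cases omitted):

--- illustrative sketch (not part of the captured output) ---
#!/usr/bin/env bash
# Multipath attach/detach sequence from the trace, without the negative tests.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# First path: creates controller NVMe0 (and bdev NVMe0n1).
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# A second path to the same subsystem on port 4421 is accepted...
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1

# ...and can be dropped again without touching the 4420 path.
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1

# A controller under a different name is always fine.
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

$RPC bdev_nvme_get_controllers | grep -c NVMe    # expect 2, as in the trace
--- end sketch ---
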
08:11:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:17.466 08:11:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:17.466 08:11:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92803' 00:20:17.466 08:11:28 -- common/autotest_common.sh@955 -- # kill 92803 00:20:17.466 08:11:28 -- common/autotest_common.sh@960 -- # wait 92803 00:20:17.724 08:11:28 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.725 08:11:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.725 08:11:28 -- common/autotest_common.sh@10 -- # set +x 00:20:17.725 08:11:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.725 08:11:28 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:17.725 08:11:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.725 08:11:28 -- common/autotest_common.sh@10 -- # set +x 00:20:17.725 08:11:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.725 08:11:28 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:17.725 08:11:28 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:17.725 08:11:28 -- common/autotest_common.sh@1607 -- # read -r file 00:20:17.725 08:11:28 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:17.725 08:11:28 -- common/autotest_common.sh@1606 -- # sort -u 00:20:17.725 08:11:28 -- common/autotest_common.sh@1608 -- # cat 00:20:17.725 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:17.725 [2024-12-07 08:11:26.023589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:17.725 [2024-12-07 08:11:26.023698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92803 ] 00:20:17.725 [2024-12-07 08:11:26.165698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.725 [2024-12-07 08:11:26.244983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.725 [2024-12-07 08:11:27.311860] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 78b70e58-05c5-4f34-8bd8-2f1cfe9da795 already exists 00:20:17.725 [2024-12-07 08:11:27.311916] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:78b70e58-05c5-4f34-8bd8-2f1cfe9da795 alias for bdev NVMe1n1 00:20:17.725 [2024-12-07 08:11:27.311952] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:17.725 Running I/O for 1 seconds... 
00:20:17.725 00:20:17.725 Latency(us) 00:20:17.725 [2024-12-07T08:11:29.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.725 [2024-12-07T08:11:29.001Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:17.725 NVMe0n1 : 1.00 21791.64 85.12 0.00 0.00 5865.64 3306.59 10724.07 00:20:17.725 [2024-12-07T08:11:29.001Z] =================================================================================================================== 00:20:17.725 [2024-12-07T08:11:29.001Z] Total : 21791.64 85.12 0.00 0.00 5865.64 3306.59 10724.07 00:20:17.725 Received shutdown signal, test time was about 1.000000 seconds 00:20:17.725 00:20:17.725 Latency(us) 00:20:17.725 [2024-12-07T08:11:29.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.725 [2024-12-07T08:11:29.001Z] =================================================================================================================== 00:20:17.725 [2024-12-07T08:11:29.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.725 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:17.725 08:11:28 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:17.725 08:11:28 -- common/autotest_common.sh@1607 -- # read -r file 00:20:17.725 08:11:28 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:17.725 08:11:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:17.725 08:11:28 -- nvmf/common.sh@116 -- # sync 00:20:17.725 08:11:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:17.725 08:11:28 -- nvmf/common.sh@119 -- # set +e 00:20:17.725 08:11:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:17.725 08:11:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:17.725 rmmod nvme_tcp 00:20:17.725 rmmod nvme_fabrics 00:20:17.725 rmmod nvme_keyring 00:20:17.725 08:11:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:17.725 08:11:28 -- nvmf/common.sh@123 -- # set -e 00:20:17.725 08:11:28 -- nvmf/common.sh@124 -- # return 0 00:20:17.725 08:11:28 -- nvmf/common.sh@477 -- # '[' -n 92751 ']' 00:20:17.725 08:11:28 -- nvmf/common.sh@478 -- # killprocess 92751 00:20:17.725 08:11:28 -- common/autotest_common.sh@936 -- # '[' -z 92751 ']' 00:20:17.725 08:11:28 -- common/autotest_common.sh@940 -- # kill -0 92751 00:20:17.725 08:11:28 -- common/autotest_common.sh@941 -- # uname 00:20:17.725 08:11:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.725 08:11:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92751 00:20:17.725 08:11:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:17.725 08:11:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:17.725 killing process with pid 92751 00:20:17.725 08:11:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92751' 00:20:17.725 08:11:28 -- common/autotest_common.sh@955 -- # kill 92751 00:20:17.725 08:11:28 -- common/autotest_common.sh@960 -- # wait 92751 00:20:17.983 08:11:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:17.983 08:11:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:17.983 08:11:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:17.983 08:11:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.983 08:11:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:17.983 08:11:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.983 08:11:29 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:17.983 08:11:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.983 08:11:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:17.983 00:20:17.983 real 0m4.967s 00:20:17.983 user 0m15.682s 00:20:17.983 sys 0m1.053s 00:20:17.983 08:11:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:17.983 08:11:29 -- common/autotest_common.sh@10 -- # set +x 00:20:17.983 ************************************ 00:20:17.983 END TEST nvmf_multicontroller 00:20:17.983 ************************************ 00:20:17.983 08:11:29 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:17.983 08:11:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:17.983 08:11:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:17.983 08:11:29 -- common/autotest_common.sh@10 -- # set +x 00:20:17.983 ************************************ 00:20:17.983 START TEST nvmf_aer 00:20:17.983 ************************************ 00:20:17.983 08:11:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:18.241 * Looking for test storage... 00:20:18.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.241 08:11:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:18.241 08:11:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:18.241 08:11:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:18.241 08:11:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:18.241 08:11:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:18.241 08:11:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:18.241 08:11:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:18.241 08:11:29 -- scripts/common.sh@335 -- # IFS=.-: 00:20:18.241 08:11:29 -- scripts/common.sh@335 -- # read -ra ver1 00:20:18.241 08:11:29 -- scripts/common.sh@336 -- # IFS=.-: 00:20:18.241 08:11:29 -- scripts/common.sh@336 -- # read -ra ver2 00:20:18.241 08:11:29 -- scripts/common.sh@337 -- # local 'op=<' 00:20:18.241 08:11:29 -- scripts/common.sh@339 -- # ver1_l=2 00:20:18.241 08:11:29 -- scripts/common.sh@340 -- # ver2_l=1 00:20:18.241 08:11:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:18.241 08:11:29 -- scripts/common.sh@343 -- # case "$op" in 00:20:18.241 08:11:29 -- scripts/common.sh@344 -- # : 1 00:20:18.241 08:11:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:18.241 08:11:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:18.241 08:11:29 -- scripts/common.sh@364 -- # decimal 1 00:20:18.241 08:11:29 -- scripts/common.sh@352 -- # local d=1 00:20:18.241 08:11:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:18.241 08:11:29 -- scripts/common.sh@354 -- # echo 1 00:20:18.241 08:11:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:18.241 08:11:29 -- scripts/common.sh@365 -- # decimal 2 00:20:18.241 08:11:29 -- scripts/common.sh@352 -- # local d=2 00:20:18.241 08:11:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:18.241 08:11:29 -- scripts/common.sh@354 -- # echo 2 00:20:18.241 08:11:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:18.241 08:11:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:18.241 08:11:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:18.241 08:11:29 -- scripts/common.sh@367 -- # return 0 00:20:18.241 08:11:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:18.241 08:11:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.241 --rc genhtml_branch_coverage=1 00:20:18.241 --rc genhtml_function_coverage=1 00:20:18.241 --rc genhtml_legend=1 00:20:18.241 --rc geninfo_all_blocks=1 00:20:18.241 --rc geninfo_unexecuted_blocks=1 00:20:18.241 00:20:18.241 ' 00:20:18.241 08:11:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.241 --rc genhtml_branch_coverage=1 00:20:18.241 --rc genhtml_function_coverage=1 00:20:18.241 --rc genhtml_legend=1 00:20:18.241 --rc geninfo_all_blocks=1 00:20:18.241 --rc geninfo_unexecuted_blocks=1 00:20:18.241 00:20:18.241 ' 00:20:18.241 08:11:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.241 --rc genhtml_branch_coverage=1 00:20:18.241 --rc genhtml_function_coverage=1 00:20:18.241 --rc genhtml_legend=1 00:20:18.241 --rc geninfo_all_blocks=1 00:20:18.241 --rc geninfo_unexecuted_blocks=1 00:20:18.241 00:20:18.241 ' 00:20:18.241 08:11:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:18.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:18.241 --rc genhtml_branch_coverage=1 00:20:18.241 --rc genhtml_function_coverage=1 00:20:18.241 --rc genhtml_legend=1 00:20:18.241 --rc geninfo_all_blocks=1 00:20:18.241 --rc geninfo_unexecuted_blocks=1 00:20:18.241 00:20:18.241 ' 00:20:18.241 08:11:29 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.241 08:11:29 -- nvmf/common.sh@7 -- # uname -s 00:20:18.241 08:11:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.241 08:11:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.241 08:11:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.241 08:11:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.241 08:11:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.241 08:11:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.241 08:11:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.241 08:11:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.241 08:11:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.241 08:11:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.241 08:11:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:18.241 
08:11:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:18.241 08:11:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.241 08:11:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.241 08:11:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.241 08:11:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.241 08:11:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.241 08:11:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.241 08:11:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.241 08:11:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.241 08:11:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.241 08:11:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.241 08:11:29 -- paths/export.sh@5 -- # export PATH 00:20:18.241 08:11:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.241 08:11:29 -- nvmf/common.sh@46 -- # : 0 00:20:18.241 08:11:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:18.241 08:11:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:18.241 08:11:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:18.241 08:11:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.241 08:11:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.241 08:11:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
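
As in the previous test, common.sh takes the host identity from nvme gen-hostnqn: the generated NQN ends in a UUID, and that UUID alone is reused as the host ID for the --hostnqn/--hostid pair. A small sketch of that derivation (the suffix-stripping is an illustrative reading of the helper, not the exact library code):

--- illustrative sketch (not part of the captured output) ---
#!/usr/bin/env bash
# Host identity derivation as used by the traced common.sh: the NQN from
# 'nvme gen-hostnqn' ends in a UUID, which doubles as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything up to the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

echo "hostnqn: $NVME_HOSTNQN"
echo "hostid:  $NVME_HOSTID"
# Used later as: nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n <subnqn>
--- end sketch ---
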
00:20:18.241 08:11:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:18.241 08:11:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:18.241 08:11:29 -- host/aer.sh@11 -- # nvmftestinit 00:20:18.241 08:11:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:18.241 08:11:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.241 08:11:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:18.241 08:11:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:18.241 08:11:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:18.241 08:11:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.241 08:11:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.241 08:11:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.241 08:11:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:18.241 08:11:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:18.241 08:11:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:18.241 08:11:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:18.241 08:11:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:18.241 08:11:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:18.242 08:11:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.242 08:11:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.242 08:11:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.242 08:11:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:18.242 08:11:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.242 08:11:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.242 08:11:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.242 08:11:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.242 08:11:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.242 08:11:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.242 08:11:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.242 08:11:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.242 08:11:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:18.242 08:11:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:18.242 Cannot find device "nvmf_tgt_br" 00:20:18.242 08:11:29 -- nvmf/common.sh@154 -- # true 00:20:18.242 08:11:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.242 Cannot find device "nvmf_tgt_br2" 00:20:18.242 08:11:29 -- nvmf/common.sh@155 -- # true 00:20:18.242 08:11:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:18.242 08:11:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:18.242 Cannot find device "nvmf_tgt_br" 00:20:18.242 08:11:29 -- nvmf/common.sh@157 -- # true 00:20:18.242 08:11:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:18.242 Cannot find device "nvmf_tgt_br2" 00:20:18.242 08:11:29 -- nvmf/common.sh@158 -- # true 00:20:18.242 08:11:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:18.500 08:11:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:18.500 08:11:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.500 08:11:29 -- nvmf/common.sh@161 -- # true 00:20:18.500 08:11:29 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.500 08:11:29 -- nvmf/common.sh@162 -- # true 00:20:18.500 08:11:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.500 08:11:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.500 08:11:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.500 08:11:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.500 08:11:29 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.500 08:11:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.500 08:11:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.500 08:11:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.500 08:11:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.500 08:11:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:18.500 08:11:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:18.500 08:11:29 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:18.500 08:11:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:18.500 08:11:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.500 08:11:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.500 08:11:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.500 08:11:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:18.500 08:11:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:18.500 08:11:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.500 08:11:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.500 08:11:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:18.500 08:11:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.500 08:11:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.500 08:11:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:18.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:18.500 00:20:18.500 --- 10.0.0.2 ping statistics --- 00:20:18.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.500 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:18.500 08:11:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:18.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:18.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:18.500 00:20:18.500 --- 10.0.0.3 ping statistics --- 00:20:18.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.500 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:18.500 08:11:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:18.500 00:20:18.500 --- 10.0.0.1 ping statistics --- 00:20:18.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.501 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:18.501 08:11:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.501 08:11:29 -- nvmf/common.sh@421 -- # return 0 00:20:18.501 08:11:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:18.501 08:11:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.501 08:11:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:18.501 08:11:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:18.501 08:11:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.501 08:11:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:18.501 08:11:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:18.501 08:11:29 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:18.501 08:11:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.501 08:11:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.501 08:11:29 -- common/autotest_common.sh@10 -- # set +x 00:20:18.501 08:11:29 -- nvmf/common.sh@469 -- # nvmfpid=93059 00:20:18.501 08:11:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:18.501 08:11:29 -- nvmf/common.sh@470 -- # waitforlisten 93059 00:20:18.501 08:11:29 -- common/autotest_common.sh@829 -- # '[' -z 93059 ']' 00:20:18.501 08:11:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.501 08:11:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.501 08:11:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.501 08:11:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.501 08:11:29 -- common/autotest_common.sh@10 -- # set +x 00:20:18.759 [2024-12-07 08:11:29.814832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:18.759 [2024-12-07 08:11:29.814946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.759 [2024-12-07 08:11:29.949858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.018 [2024-12-07 08:11:30.034131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.018 [2024-12-07 08:11:30.034359] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.018 [2024-12-07 08:11:30.034373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.018 [2024-12-07 08:11:30.034382] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
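
The modprobe nvme-tcp above is what lets the kernel initiator in the root namespace reach the listeners the target exposes inside nvmf_tgt_ns_spdk. Whether aer.sh issues the connect itself is outside this excerpt, but a typical use of the pieces set up so far would be (illustrative only; the subsystem name matches the one created further down the trace):

--- illustrative sketch (not part of the captured output) ---
#!/usr/bin/env bash
# Illustrative kernel-initiator connect using the host identity and topology
# prepared above (requires root and nvme-cli).
modprobe nvme-tcp

NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

nvme list    # the namespace backed by Malloc0 should appear as a /dev/nvme*n1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
--- end sketch ---
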
00:20:19.018 [2024-12-07 08:11:30.034573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.018 [2024-12-07 08:11:30.034876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.018 [2024-12-07 08:11:30.035270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.018 [2024-12-07 08:11:30.035277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.586 08:11:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.586 08:11:30 -- common/autotest_common.sh@862 -- # return 0 00:20:19.586 08:11:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.586 08:11:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.586 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:20:19.586 08:11:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.586 08:11:30 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.586 08:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.586 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:20:19.586 [2024-12-07 08:11:30.823149] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.586 08:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.586 08:11:30 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:19.586 08:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.586 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 Malloc0 00:20:19.845 08:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.845 08:11:30 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:19.845 08:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.845 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 08:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.845 08:11:30 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.845 08:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.845 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 08:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.845 08:11:30 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.845 08:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.845 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 [2024-12-07 08:11:30.892885] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.845 08:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.845 08:11:30 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:19.845 08:11:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.845 08:11:30 -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 [2024-12-07 08:11:30.900637] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:19.845 [ 00:20:19.845 { 00:20:19.845 "allow_any_host": true, 00:20:19.845 "hosts": [], 00:20:19.845 "listen_addresses": [], 00:20:19.845 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:19.845 "subtype": "Discovery" 00:20:19.845 }, 00:20:19.845 { 00:20:19.845 "allow_any_host": true, 00:20:19.845 "hosts": 
[], 00:20:19.845 "listen_addresses": [ 00:20:19.845 { 00:20:19.845 "adrfam": "IPv4", 00:20:19.845 "traddr": "10.0.0.2", 00:20:19.845 "transport": "TCP", 00:20:19.845 "trsvcid": "4420", 00:20:19.845 "trtype": "TCP" 00:20:19.845 } 00:20:19.845 ], 00:20:19.845 "max_cntlid": 65519, 00:20:19.845 "max_namespaces": 2, 00:20:19.845 "min_cntlid": 1, 00:20:19.845 "model_number": "SPDK bdev Controller", 00:20:19.845 "namespaces": [ 00:20:19.845 { 00:20:19.845 "bdev_name": "Malloc0", 00:20:19.845 "name": "Malloc0", 00:20:19.845 "nguid": "E7131DA6948E47189FCEA6967FC9968C", 00:20:19.845 "nsid": 1, 00:20:19.845 "uuid": "e7131da6-948e-4718-9fce-a6967fc9968c" 00:20:19.845 } 00:20:19.845 ], 00:20:19.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.845 "serial_number": "SPDK00000000000001", 00:20:19.845 "subtype": "NVMe" 00:20:19.845 } 00:20:19.845 ] 00:20:19.845 08:11:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.845 08:11:30 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:19.845 08:11:30 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:19.845 08:11:30 -- host/aer.sh@33 -- # aerpid=93113 00:20:19.845 08:11:30 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:19.845 08:11:30 -- common/autotest_common.sh@1254 -- # local i=0 00:20:19.845 08:11:30 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:19.845 08:11:30 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.845 08:11:30 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:19.845 08:11:30 -- common/autotest_common.sh@1257 -- # i=1 00:20:19.845 08:11:30 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:19.845 08:11:31 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.845 08:11:31 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:19.845 08:11:31 -- common/autotest_common.sh@1257 -- # i=2 00:20:19.845 08:11:31 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:20.104 08:11:31 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.104 08:11:31 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:20.104 08:11:31 -- common/autotest_common.sh@1265 -- # return 0 00:20:20.104 08:11:31 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:20.104 08:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 Malloc1 00:20:20.104 08:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 08:11:31 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:20.104 08:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 08:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 08:11:31 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:20.104 08:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 Asynchronous Event Request test 00:20:20.104 Attaching to 10.0.0.2 00:20:20.104 Attached to 10.0.0.2 00:20:20.104 Registering asynchronous event callbacks... 00:20:20.104 Starting namespace attribute notice tests for all controllers... 
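The trace above shows host/aer.sh provisioning the subsystem over RPC, starting the aer tool, and then hot-adding a second namespace; the aer_cb output just below is the resulting namespace-attribute-changed notice. A hedged reconstruction of that sequence (rpc_cmd is the harness wrapper around scripts/rpc.py talking to the target's RPC socket; the backgrounding and $! capture are inferred from the aerpid/wait lines in the trace, not shown verbatim):

    # provision the target: TCP transport, a malloc-backed namespace, a listener on 10.0.0.2:4420
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # aer tool connects, registers AER callbacks, then touches the file to signal readiness
    test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    waitforfile /tmp/aer_touch_file

    # hot-adding a second namespace is what fires the AEN logged below
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"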
00:20:20.104 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:20.104 aer_cb - Changed Namespace 00:20:20.104 Cleaning up... 00:20:20.104 [ 00:20:20.104 { 00:20:20.104 "allow_any_host": true, 00:20:20.104 "hosts": [], 00:20:20.104 "listen_addresses": [], 00:20:20.104 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.104 "subtype": "Discovery" 00:20:20.104 }, 00:20:20.104 { 00:20:20.104 "allow_any_host": true, 00:20:20.104 "hosts": [], 00:20:20.104 "listen_addresses": [ 00:20:20.104 { 00:20:20.104 "adrfam": "IPv4", 00:20:20.104 "traddr": "10.0.0.2", 00:20:20.104 "transport": "TCP", 00:20:20.104 "trsvcid": "4420", 00:20:20.104 "trtype": "TCP" 00:20:20.104 } 00:20:20.104 ], 00:20:20.104 "max_cntlid": 65519, 00:20:20.104 "max_namespaces": 2, 00:20:20.104 "min_cntlid": 1, 00:20:20.104 "model_number": "SPDK bdev Controller", 00:20:20.104 "namespaces": [ 00:20:20.104 { 00:20:20.104 "bdev_name": "Malloc0", 00:20:20.104 "name": "Malloc0", 00:20:20.104 "nguid": "E7131DA6948E47189FCEA6967FC9968C", 00:20:20.104 "nsid": 1, 00:20:20.104 "uuid": "e7131da6-948e-4718-9fce-a6967fc9968c" 00:20:20.104 }, 00:20:20.104 { 00:20:20.104 "bdev_name": "Malloc1", 00:20:20.104 "name": "Malloc1", 00:20:20.104 "nguid": "4B8ED740DA9B4E3DA0E857BD160AA09E", 00:20:20.104 "nsid": 2, 00:20:20.104 "uuid": "4b8ed740-da9b-4e3d-a0e8-57bd160aa09e" 00:20:20.104 } 00:20:20.104 ], 00:20:20.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.104 "serial_number": "SPDK00000000000001", 00:20:20.104 "subtype": "NVMe" 00:20:20.104 } 00:20:20.104 ] 00:20:20.104 08:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 08:11:31 -- host/aer.sh@43 -- # wait 93113 00:20:20.104 08:11:31 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:20.104 08:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 08:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 08:11:31 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:20.104 08:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 08:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 08:11:31 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.104 08:11:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.104 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.104 08:11:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.104 08:11:31 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:20.104 08:11:31 -- host/aer.sh@51 -- # nvmftestfini 00:20:20.104 08:11:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:20.104 08:11:31 -- nvmf/common.sh@116 -- # sync 00:20:20.104 08:11:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:20.104 08:11:31 -- nvmf/common.sh@119 -- # set +e 00:20:20.104 08:11:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:20.104 08:11:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:20.104 rmmod nvme_tcp 00:20:20.104 rmmod nvme_fabrics 00:20:20.104 rmmod nvme_keyring 00:20:20.364 08:11:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:20.364 08:11:31 -- nvmf/common.sh@123 -- # set -e 00:20:20.364 08:11:31 -- nvmf/common.sh@124 -- # return 0 00:20:20.364 08:11:31 -- nvmf/common.sh@477 -- # '[' -n 93059 ']' 00:20:20.364 08:11:31 -- nvmf/common.sh@478 -- # killprocess 93059 00:20:20.364 08:11:31 -- 
common/autotest_common.sh@936 -- # '[' -z 93059 ']' 00:20:20.364 08:11:31 -- common/autotest_common.sh@940 -- # kill -0 93059 00:20:20.364 08:11:31 -- common/autotest_common.sh@941 -- # uname 00:20:20.364 08:11:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:20.364 08:11:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93059 00:20:20.364 08:11:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:20.365 08:11:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:20.365 08:11:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93059' 00:20:20.365 killing process with pid 93059 00:20:20.365 08:11:31 -- common/autotest_common.sh@955 -- # kill 93059 00:20:20.365 [2024-12-07 08:11:31.422608] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:20.365 08:11:31 -- common/autotest_common.sh@960 -- # wait 93059 00:20:20.365 08:11:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:20.365 08:11:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:20.365 08:11:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:20.365 08:11:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.365 08:11:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:20.365 08:11:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.365 08:11:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.365 08:11:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.624 08:11:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:20.624 00:20:20.624 real 0m2.428s 00:20:20.624 user 0m6.621s 00:20:20.624 sys 0m0.688s 00:20:20.624 08:11:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:20.624 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.624 ************************************ 00:20:20.624 END TEST nvmf_aer 00:20:20.624 ************************************ 00:20:20.624 08:11:31 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:20.624 08:11:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:20.624 08:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:20.624 08:11:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.624 ************************************ 00:20:20.624 START TEST nvmf_async_init 00:20:20.624 ************************************ 00:20:20.624 08:11:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:20.624 * Looking for test storage... 
00:20:20.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.624 08:11:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:20.624 08:11:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:20.624 08:11:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:20.624 08:11:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:20.624 08:11:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:20.624 08:11:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:20.624 08:11:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:20.624 08:11:31 -- scripts/common.sh@335 -- # IFS=.-: 00:20:20.624 08:11:31 -- scripts/common.sh@335 -- # read -ra ver1 00:20:20.624 08:11:31 -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.624 08:11:31 -- scripts/common.sh@336 -- # read -ra ver2 00:20:20.624 08:11:31 -- scripts/common.sh@337 -- # local 'op=<' 00:20:20.624 08:11:31 -- scripts/common.sh@339 -- # ver1_l=2 00:20:20.624 08:11:31 -- scripts/common.sh@340 -- # ver2_l=1 00:20:20.624 08:11:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:20.624 08:11:31 -- scripts/common.sh@343 -- # case "$op" in 00:20:20.624 08:11:31 -- scripts/common.sh@344 -- # : 1 00:20:20.624 08:11:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:20.624 08:11:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:20.624 08:11:31 -- scripts/common.sh@364 -- # decimal 1 00:20:20.624 08:11:31 -- scripts/common.sh@352 -- # local d=1 00:20:20.624 08:11:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.624 08:11:31 -- scripts/common.sh@354 -- # echo 1 00:20:20.624 08:11:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:20.624 08:11:31 -- scripts/common.sh@365 -- # decimal 2 00:20:20.624 08:11:31 -- scripts/common.sh@352 -- # local d=2 00:20:20.624 08:11:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.624 08:11:31 -- scripts/common.sh@354 -- # echo 2 00:20:20.624 08:11:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:20.624 08:11:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:20.624 08:11:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:20.624 08:11:31 -- scripts/common.sh@367 -- # return 0 00:20:20.624 08:11:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.624 08:11:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:20.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.624 --rc genhtml_branch_coverage=1 00:20:20.624 --rc genhtml_function_coverage=1 00:20:20.624 --rc genhtml_legend=1 00:20:20.624 --rc geninfo_all_blocks=1 00:20:20.624 --rc geninfo_unexecuted_blocks=1 00:20:20.624 00:20:20.624 ' 00:20:20.624 08:11:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:20.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.624 --rc genhtml_branch_coverage=1 00:20:20.624 --rc genhtml_function_coverage=1 00:20:20.624 --rc genhtml_legend=1 00:20:20.624 --rc geninfo_all_blocks=1 00:20:20.624 --rc geninfo_unexecuted_blocks=1 00:20:20.624 00:20:20.624 ' 00:20:20.624 08:11:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:20.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.624 --rc genhtml_branch_coverage=1 00:20:20.624 --rc genhtml_function_coverage=1 00:20:20.624 --rc genhtml_legend=1 00:20:20.624 --rc geninfo_all_blocks=1 00:20:20.624 --rc geninfo_unexecuted_blocks=1 00:20:20.624 00:20:20.624 ' 00:20:20.624 
08:11:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:20.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.624 --rc genhtml_branch_coverage=1 00:20:20.624 --rc genhtml_function_coverage=1 00:20:20.624 --rc genhtml_legend=1 00:20:20.624 --rc geninfo_all_blocks=1 00:20:20.624 --rc geninfo_unexecuted_blocks=1 00:20:20.624 00:20:20.624 ' 00:20:20.624 08:11:31 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.624 08:11:31 -- nvmf/common.sh@7 -- # uname -s 00:20:20.624 08:11:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.624 08:11:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.624 08:11:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.624 08:11:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.624 08:11:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.625 08:11:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.625 08:11:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.884 08:11:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.884 08:11:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.884 08:11:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.884 08:11:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:20.884 08:11:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:20.884 08:11:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.884 08:11:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.884 08:11:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.884 08:11:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.884 08:11:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.884 08:11:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.884 08:11:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.884 08:11:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.884 08:11:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.884 08:11:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.884 08:11:31 -- paths/export.sh@5 -- # export PATH 00:20:20.884 08:11:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.884 08:11:31 -- nvmf/common.sh@46 -- # : 0 00:20:20.884 08:11:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:20.884 08:11:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:20.884 08:11:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:20.884 08:11:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.884 08:11:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.884 08:11:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:20.884 08:11:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:20.884 08:11:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:20.884 08:11:31 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:20.884 08:11:31 -- host/async_init.sh@14 -- # null_block_size=512 00:20:20.884 08:11:31 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:20.884 08:11:31 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:20.884 08:11:31 -- host/async_init.sh@20 -- # tr -d - 00:20:20.884 08:11:31 -- host/async_init.sh@20 -- # uuidgen 00:20:20.884 08:11:31 -- host/async_init.sh@20 -- # nguid=3e76197893f94c65beff3f7c4727f317 00:20:20.884 08:11:31 -- host/async_init.sh@22 -- # nvmftestinit 00:20:20.884 08:11:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:20.884 08:11:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.884 08:11:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:20.884 08:11:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:20.884 08:11:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:20.884 08:11:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.884 08:11:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.884 08:11:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.884 08:11:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:20.884 08:11:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:20.884 08:11:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:20.884 08:11:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:20.884 08:11:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:20.884 08:11:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:20.884 08:11:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.884 08:11:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.884 08:11:31 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:20.884 08:11:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:20.884 08:11:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:20.884 08:11:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:20.884 08:11:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:20.884 08:11:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.884 08:11:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:20.884 08:11:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:20.884 08:11:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:20.884 08:11:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:20.884 08:11:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:20.884 08:11:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:20.884 Cannot find device "nvmf_tgt_br" 00:20:20.884 08:11:31 -- nvmf/common.sh@154 -- # true 00:20:20.884 08:11:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.884 Cannot find device "nvmf_tgt_br2" 00:20:20.884 08:11:31 -- nvmf/common.sh@155 -- # true 00:20:20.884 08:11:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:20.884 08:11:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:20.884 Cannot find device "nvmf_tgt_br" 00:20:20.884 08:11:31 -- nvmf/common.sh@157 -- # true 00:20:20.884 08:11:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:20.884 Cannot find device "nvmf_tgt_br2" 00:20:20.884 08:11:31 -- nvmf/common.sh@158 -- # true 00:20:20.884 08:11:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:20.884 08:11:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:20.884 08:11:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.884 08:11:32 -- nvmf/common.sh@161 -- # true 00:20:20.884 08:11:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.884 08:11:32 -- nvmf/common.sh@162 -- # true 00:20:20.884 08:11:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:20.884 08:11:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:20.884 08:11:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:20.884 08:11:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:20.884 08:11:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:20.884 08:11:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:20.884 08:11:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.884 08:11:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:20.884 08:11:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:20.884 08:11:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:20.884 08:11:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:20.884 08:11:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:20.885 08:11:32 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:20.885 08:11:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.143 08:11:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.143 08:11:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.143 08:11:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:21.143 08:11:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:21.143 08:11:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.144 08:11:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.144 08:11:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.144 08:11:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.144 08:11:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.144 08:11:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:21.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:20:21.144 00:20:21.144 --- 10.0.0.2 ping statistics --- 00:20:21.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.144 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:21.144 08:11:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:21.144 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.144 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:20:21.144 00:20:21.144 --- 10.0.0.3 ping statistics --- 00:20:21.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.144 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:21.144 08:11:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:21.144 00:20:21.144 --- 10.0.0.1 ping statistics --- 00:20:21.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.144 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:21.144 08:11:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.144 08:11:32 -- nvmf/common.sh@421 -- # return 0 00:20:21.144 08:11:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:21.144 08:11:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.144 08:11:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:21.144 08:11:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:21.144 08:11:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.144 08:11:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:21.144 08:11:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:21.144 08:11:32 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:21.144 08:11:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:21.144 08:11:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.144 08:11:32 -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 08:11:32 -- nvmf/common.sh@469 -- # nvmfpid=93290 00:20:21.144 08:11:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:21.144 08:11:32 -- nvmf/common.sh@470 -- # waitforlisten 93290 00:20:21.144 08:11:32 -- common/autotest_common.sh@829 -- # '[' -z 93290 ']' 00:20:21.144 08:11:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.144 08:11:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.144 08:11:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.144 08:11:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.144 08:11:32 -- common/autotest_common.sh@10 -- # set +x 00:20:21.144 [2024-12-07 08:11:32.310914] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:21.144 [2024-12-07 08:11:32.310999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.402 [2024-12-07 08:11:32.444407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.402 [2024-12-07 08:11:32.516659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:21.402 [2024-12-07 08:11:32.516795] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.402 [2024-12-07 08:11:32.516807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.402 [2024-12-07 08:11:32.516815] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
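nvmfappstart above restarts the target for this test with a single-core mask and waits for its RPC socket. A minimal sketch of that pattern, assuming the helper's usual background-launch-then-wait behaviour (the & and $! capture are inferred; the binary path, flags and waitforlisten call are taken from the trace):

    # load the kernel nvme-tcp module (done unconditionally by the harness for TCP runs)
    modprobe nvme-tcp

    # run the target inside the test namespace: shm id 0, all tracepoint groups, core mask 0x1
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # block until the app is up and serving RPCs on the default socket /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"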
00:20:21.402 [2024-12-07 08:11:32.516843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.339 08:11:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.339 08:11:33 -- common/autotest_common.sh@862 -- # return 0 00:20:22.339 08:11:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:22.339 08:11:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.339 08:11:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.339 08:11:33 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:22.339 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.339 [2024-12-07 08:11:33.336304] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.339 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.339 08:11:33 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:22.339 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.339 null0 00:20:22.339 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.339 08:11:33 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:22.339 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.339 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.339 08:11:33 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:22.339 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.339 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.339 08:11:33 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3e76197893f94c65beff3f7c4727f317 00:20:22.339 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.339 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.339 08:11:33 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:22.339 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.339 [2024-12-07 08:11:33.376378] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.339 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.339 08:11:33 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:22.339 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.339 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 nvme0n1 00:20:22.599 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 08:11:33 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:22.599 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 [ 00:20:22.599 { 00:20:22.599 "aliases": [ 00:20:22.599 "3e761978-93f9-4c65-beff-3f7c4727f317" 
00:20:22.599 ], 00:20:22.599 "assigned_rate_limits": { 00:20:22.599 "r_mbytes_per_sec": 0, 00:20:22.599 "rw_ios_per_sec": 0, 00:20:22.599 "rw_mbytes_per_sec": 0, 00:20:22.599 "w_mbytes_per_sec": 0 00:20:22.599 }, 00:20:22.599 "block_size": 512, 00:20:22.599 "claimed": false, 00:20:22.599 "driver_specific": { 00:20:22.599 "mp_policy": "active_passive", 00:20:22.599 "nvme": [ 00:20:22.599 { 00:20:22.599 "ctrlr_data": { 00:20:22.599 "ana_reporting": false, 00:20:22.599 "cntlid": 1, 00:20:22.599 "firmware_revision": "24.01.1", 00:20:22.599 "model_number": "SPDK bdev Controller", 00:20:22.599 "multi_ctrlr": true, 00:20:22.599 "oacs": { 00:20:22.599 "firmware": 0, 00:20:22.599 "format": 0, 00:20:22.599 "ns_manage": 0, 00:20:22.599 "security": 0 00:20:22.599 }, 00:20:22.599 "serial_number": "00000000000000000000", 00:20:22.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.599 "vendor_id": "0x8086" 00:20:22.599 }, 00:20:22.599 "ns_data": { 00:20:22.599 "can_share": true, 00:20:22.599 "id": 1 00:20:22.599 }, 00:20:22.599 "trid": { 00:20:22.599 "adrfam": "IPv4", 00:20:22.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.599 "traddr": "10.0.0.2", 00:20:22.599 "trsvcid": "4420", 00:20:22.599 "trtype": "TCP" 00:20:22.599 }, 00:20:22.599 "vs": { 00:20:22.599 "nvme_version": "1.3" 00:20:22.599 } 00:20:22.599 } 00:20:22.599 ] 00:20:22.599 }, 00:20:22.599 "name": "nvme0n1", 00:20:22.599 "num_blocks": 2097152, 00:20:22.599 "product_name": "NVMe disk", 00:20:22.599 "supported_io_types": { 00:20:22.599 "abort": true, 00:20:22.599 "compare": true, 00:20:22.599 "compare_and_write": true, 00:20:22.599 "flush": true, 00:20:22.599 "nvme_admin": true, 00:20:22.599 "nvme_io": true, 00:20:22.599 "read": true, 00:20:22.599 "reset": true, 00:20:22.599 "unmap": false, 00:20:22.599 "write": true, 00:20:22.599 "write_zeroes": true 00:20:22.599 }, 00:20:22.599 "uuid": "3e761978-93f9-4c65-beff-3f7c4727f317", 00:20:22.599 "zoned": false 00:20:22.599 } 00:20:22.599 ] 00:20:22.599 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 08:11:33 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:22.599 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 [2024-12-07 08:11:33.653772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:22.599 [2024-12-07 08:11:33.653857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2496a00 (9): Bad file descriptor 00:20:22.599 [2024-12-07 08:11:33.785301] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
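The reset exercised above goes through the SPDK host-side bdev_nvme driver rather than the kernel initiator: the remote controller is attached as a bdev, inspected, then reset, and the trace shows the TCP association being torn down and re-established (cntlid is 1 in the dump above and 2 in the dump that follows). The three RPCs involved, as used in this run:

    # attach the remote subsystem; namespace 1 shows up as the bdev nvme0n1
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0

    # inspect the resulting bdev (JSON dumps like the ones above and below)
    rpc_cmd bdev_get_bdevs -b nvme0n1

    # disconnect and reconnect the controller; a new controller ID is allocated on reconnect
    rpc_cmd bdev_nvme_reset_controller nvme0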
00:20:22.599 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.599 08:11:33 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:22.599 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.599 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.599 [ 00:20:22.599 { 00:20:22.599 "aliases": [ 00:20:22.599 "3e761978-93f9-4c65-beff-3f7c4727f317" 00:20:22.599 ], 00:20:22.599 "assigned_rate_limits": { 00:20:22.599 "r_mbytes_per_sec": 0, 00:20:22.599 "rw_ios_per_sec": 0, 00:20:22.599 "rw_mbytes_per_sec": 0, 00:20:22.599 "w_mbytes_per_sec": 0 00:20:22.599 }, 00:20:22.599 "block_size": 512, 00:20:22.599 "claimed": false, 00:20:22.599 "driver_specific": { 00:20:22.599 "mp_policy": "active_passive", 00:20:22.599 "nvme": [ 00:20:22.599 { 00:20:22.599 "ctrlr_data": { 00:20:22.599 "ana_reporting": false, 00:20:22.599 "cntlid": 2, 00:20:22.599 "firmware_revision": "24.01.1", 00:20:22.600 "model_number": "SPDK bdev Controller", 00:20:22.600 "multi_ctrlr": true, 00:20:22.600 "oacs": { 00:20:22.600 "firmware": 0, 00:20:22.600 "format": 0, 00:20:22.600 "ns_manage": 0, 00:20:22.600 "security": 0 00:20:22.600 }, 00:20:22.600 "serial_number": "00000000000000000000", 00:20:22.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.600 "vendor_id": "0x8086" 00:20:22.600 }, 00:20:22.600 "ns_data": { 00:20:22.600 "can_share": true, 00:20:22.600 "id": 1 00:20:22.600 }, 00:20:22.600 "trid": { 00:20:22.600 "adrfam": "IPv4", 00:20:22.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.600 "traddr": "10.0.0.2", 00:20:22.600 "trsvcid": "4420", 00:20:22.600 "trtype": "TCP" 00:20:22.600 }, 00:20:22.600 "vs": { 00:20:22.600 "nvme_version": "1.3" 00:20:22.600 } 00:20:22.600 } 00:20:22.600 ] 00:20:22.600 }, 00:20:22.600 "name": "nvme0n1", 00:20:22.600 "num_blocks": 2097152, 00:20:22.600 "product_name": "NVMe disk", 00:20:22.600 "supported_io_types": { 00:20:22.600 "abort": true, 00:20:22.600 "compare": true, 00:20:22.600 "compare_and_write": true, 00:20:22.600 "flush": true, 00:20:22.600 "nvme_admin": true, 00:20:22.600 "nvme_io": true, 00:20:22.600 "read": true, 00:20:22.600 "reset": true, 00:20:22.600 "unmap": false, 00:20:22.600 "write": true, 00:20:22.600 "write_zeroes": true 00:20:22.600 }, 00:20:22.600 "uuid": "3e761978-93f9-4c65-beff-3f7c4727f317", 00:20:22.600 "zoned": false 00:20:22.600 } 00:20:22.600 ] 00:20:22.600 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.600 08:11:33 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.600 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.600 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.600 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.600 08:11:33 -- host/async_init.sh@53 -- # mktemp 00:20:22.600 08:11:33 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.fnvGyd8I91 00:20:22.600 08:11:33 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:22.600 08:11:33 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.fnvGyd8I91 00:20:22.600 08:11:33 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:22.600 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.600 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.600 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.600 08:11:33 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:22.600 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.600 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.600 [2024-12-07 08:11:33.841899] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.600 [2024-12-07 08:11:33.842086] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:22.600 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.600 08:11:33 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fnvGyd8I91 00:20:22.600 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.600 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.600 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.600 08:11:33 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fnvGyd8I91 00:20:22.600 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.600 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.600 [2024-12-07 08:11:33.861932] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.859 nvme0n1 00:20:22.859 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.859 08:11:33 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:22.859 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.859 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.859 [ 00:20:22.859 { 00:20:22.859 "aliases": [ 00:20:22.859 "3e761978-93f9-4c65-beff-3f7c4727f317" 00:20:22.859 ], 00:20:22.859 "assigned_rate_limits": { 00:20:22.859 "r_mbytes_per_sec": 0, 00:20:22.859 "rw_ios_per_sec": 0, 00:20:22.859 "rw_mbytes_per_sec": 0, 00:20:22.859 "w_mbytes_per_sec": 0 00:20:22.859 }, 00:20:22.859 "block_size": 512, 00:20:22.859 "claimed": false, 00:20:22.859 "driver_specific": { 00:20:22.859 "mp_policy": "active_passive", 00:20:22.859 "nvme": [ 00:20:22.859 { 00:20:22.859 "ctrlr_data": { 00:20:22.859 "ana_reporting": false, 00:20:22.859 "cntlid": 3, 00:20:22.859 "firmware_revision": "24.01.1", 00:20:22.859 "model_number": "SPDK bdev Controller", 00:20:22.859 "multi_ctrlr": true, 00:20:22.859 "oacs": { 00:20:22.859 "firmware": 0, 00:20:22.859 "format": 0, 00:20:22.859 "ns_manage": 0, 00:20:22.859 "security": 0 00:20:22.859 }, 00:20:22.859 "serial_number": "00000000000000000000", 00:20:22.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.859 "vendor_id": "0x8086" 00:20:22.859 }, 00:20:22.859 "ns_data": { 00:20:22.859 "can_share": true, 00:20:22.859 "id": 1 00:20:22.859 }, 00:20:22.859 "trid": { 00:20:22.859 "adrfam": "IPv4", 00:20:22.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.859 "traddr": "10.0.0.2", 00:20:22.859 "trsvcid": "4421", 00:20:22.859 "trtype": "TCP" 00:20:22.859 }, 00:20:22.859 "vs": { 00:20:22.859 "nvme_version": "1.3" 00:20:22.859 } 00:20:22.859 } 00:20:22.859 ] 00:20:22.859 }, 00:20:22.859 "name": "nvme0n1", 00:20:22.859 "num_blocks": 2097152, 00:20:22.859 "product_name": "NVMe disk", 00:20:22.860 "supported_io_types": { 00:20:22.860 "abort": true, 00:20:22.860 "compare": true, 00:20:22.860 "compare_and_write": true, 00:20:22.860 "flush": true, 00:20:22.860 "nvme_admin": true, 00:20:22.860 "nvme_io": true, 00:20:22.860 
"read": true, 00:20:22.860 "reset": true, 00:20:22.860 "unmap": false, 00:20:22.860 "write": true, 00:20:22.860 "write_zeroes": true 00:20:22.860 }, 00:20:22.860 "uuid": "3e761978-93f9-4c65-beff-3f7c4727f317", 00:20:22.860 "zoned": false 00:20:22.860 } 00:20:22.860 ] 00:20:22.860 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.860 08:11:33 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.860 08:11:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.860 08:11:33 -- common/autotest_common.sh@10 -- # set +x 00:20:22.860 08:11:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.860 08:11:33 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.fnvGyd8I91 00:20:22.860 08:11:33 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:22.860 08:11:33 -- host/async_init.sh@78 -- # nvmftestfini 00:20:22.860 08:11:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:22.860 08:11:33 -- nvmf/common.sh@116 -- # sync 00:20:22.860 08:11:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:22.860 08:11:34 -- nvmf/common.sh@119 -- # set +e 00:20:22.860 08:11:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:22.860 08:11:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:22.860 rmmod nvme_tcp 00:20:22.860 rmmod nvme_fabrics 00:20:22.860 rmmod nvme_keyring 00:20:22.860 08:11:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:22.860 08:11:34 -- nvmf/common.sh@123 -- # set -e 00:20:22.860 08:11:34 -- nvmf/common.sh@124 -- # return 0 00:20:22.860 08:11:34 -- nvmf/common.sh@477 -- # '[' -n 93290 ']' 00:20:22.860 08:11:34 -- nvmf/common.sh@478 -- # killprocess 93290 00:20:22.860 08:11:34 -- common/autotest_common.sh@936 -- # '[' -z 93290 ']' 00:20:22.860 08:11:34 -- common/autotest_common.sh@940 -- # kill -0 93290 00:20:22.860 08:11:34 -- common/autotest_common.sh@941 -- # uname 00:20:22.860 08:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:22.860 08:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93290 00:20:22.860 08:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:22.860 08:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:22.860 08:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93290' 00:20:22.860 killing process with pid 93290 00:20:22.860 08:11:34 -- common/autotest_common.sh@955 -- # kill 93290 00:20:22.860 08:11:34 -- common/autotest_common.sh@960 -- # wait 93290 00:20:23.119 08:11:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:23.119 08:11:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:23.119 08:11:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:23.119 08:11:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.119 08:11:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:23.119 08:11:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.119 08:11:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.119 08:11:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.119 08:11:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:23.119 00:20:23.119 real 0m2.604s 00:20:23.119 user 0m2.433s 00:20:23.119 sys 0m0.619s 00:20:23.119 08:11:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:23.119 08:11:34 -- common/autotest_common.sh@10 -- # set +x 00:20:23.119 ************************************ 00:20:23.119 END TEST nvmf_async_init 00:20:23.119 
************************************ 00:20:23.119 08:11:34 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:23.119 08:11:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:23.119 08:11:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:23.119 08:11:34 -- common/autotest_common.sh@10 -- # set +x 00:20:23.119 ************************************ 00:20:23.119 START TEST dma 00:20:23.119 ************************************ 00:20:23.119 08:11:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:23.379 * Looking for test storage... 00:20:23.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.379 08:11:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:23.379 08:11:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:23.379 08:11:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:23.379 08:11:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:23.379 08:11:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:23.379 08:11:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:23.379 08:11:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:23.379 08:11:34 -- scripts/common.sh@335 -- # IFS=.-: 00:20:23.379 08:11:34 -- scripts/common.sh@335 -- # read -ra ver1 00:20:23.379 08:11:34 -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.379 08:11:34 -- scripts/common.sh@336 -- # read -ra ver2 00:20:23.379 08:11:34 -- scripts/common.sh@337 -- # local 'op=<' 00:20:23.379 08:11:34 -- scripts/common.sh@339 -- # ver1_l=2 00:20:23.379 08:11:34 -- scripts/common.sh@340 -- # ver2_l=1 00:20:23.379 08:11:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:23.379 08:11:34 -- scripts/common.sh@343 -- # case "$op" in 00:20:23.379 08:11:34 -- scripts/common.sh@344 -- # : 1 00:20:23.379 08:11:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:23.379 08:11:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.379 08:11:34 -- scripts/common.sh@364 -- # decimal 1 00:20:23.379 08:11:34 -- scripts/common.sh@352 -- # local d=1 00:20:23.379 08:11:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.379 08:11:34 -- scripts/common.sh@354 -- # echo 1 00:20:23.379 08:11:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:23.379 08:11:34 -- scripts/common.sh@365 -- # decimal 2 00:20:23.379 08:11:34 -- scripts/common.sh@352 -- # local d=2 00:20:23.379 08:11:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.379 08:11:34 -- scripts/common.sh@354 -- # echo 2 00:20:23.379 08:11:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:23.379 08:11:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:23.379 08:11:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:23.379 08:11:34 -- scripts/common.sh@367 -- # return 0 00:20:23.379 08:11:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.379 08:11:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:23.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.379 --rc genhtml_branch_coverage=1 00:20:23.379 --rc genhtml_function_coverage=1 00:20:23.379 --rc genhtml_legend=1 00:20:23.379 --rc geninfo_all_blocks=1 00:20:23.379 --rc geninfo_unexecuted_blocks=1 00:20:23.379 00:20:23.379 ' 00:20:23.379 08:11:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:23.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.379 --rc genhtml_branch_coverage=1 00:20:23.379 --rc genhtml_function_coverage=1 00:20:23.379 --rc genhtml_legend=1 00:20:23.379 --rc geninfo_all_blocks=1 00:20:23.379 --rc geninfo_unexecuted_blocks=1 00:20:23.379 00:20:23.379 ' 00:20:23.379 08:11:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:23.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.379 --rc genhtml_branch_coverage=1 00:20:23.379 --rc genhtml_function_coverage=1 00:20:23.379 --rc genhtml_legend=1 00:20:23.379 --rc geninfo_all_blocks=1 00:20:23.379 --rc geninfo_unexecuted_blocks=1 00:20:23.379 00:20:23.379 ' 00:20:23.379 08:11:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:23.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.379 --rc genhtml_branch_coverage=1 00:20:23.380 --rc genhtml_function_coverage=1 00:20:23.380 --rc genhtml_legend=1 00:20:23.380 --rc geninfo_all_blocks=1 00:20:23.380 --rc geninfo_unexecuted_blocks=1 00:20:23.380 00:20:23.380 ' 00:20:23.380 08:11:34 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.380 08:11:34 -- nvmf/common.sh@7 -- # uname -s 00:20:23.380 08:11:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.380 08:11:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.380 08:11:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.380 08:11:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.380 08:11:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.380 08:11:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.380 08:11:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.380 08:11:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.380 08:11:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.380 08:11:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.380 08:11:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:23.380 
08:11:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:23.380 08:11:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.380 08:11:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.380 08:11:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.380 08:11:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.380 08:11:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.380 08:11:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.380 08:11:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.380 08:11:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.380 08:11:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.380 08:11:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.380 08:11:34 -- paths/export.sh@5 -- # export PATH 00:20:23.380 08:11:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.380 08:11:34 -- nvmf/common.sh@46 -- # : 0 00:20:23.380 08:11:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:23.380 08:11:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:23.380 08:11:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:23.380 08:11:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.380 08:11:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.380 08:11:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:23.380 08:11:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:23.380 08:11:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:23.380 08:11:34 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:23.380 08:11:34 -- host/dma.sh@13 -- # exit 0 00:20:23.380 00:20:23.380 real 0m0.206s 00:20:23.380 user 0m0.141s 00:20:23.380 sys 0m0.076s 00:20:23.380 08:11:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:23.380 08:11:34 -- common/autotest_common.sh@10 -- # set +x 00:20:23.380 ************************************ 00:20:23.380 END TEST dma 00:20:23.380 ************************************ 00:20:23.380 08:11:34 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:23.380 08:11:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:23.380 08:11:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:23.380 08:11:34 -- common/autotest_common.sh@10 -- # set +x 00:20:23.380 ************************************ 00:20:23.380 START TEST nvmf_identify 00:20:23.380 ************************************ 00:20:23.380 08:11:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:23.640 * Looking for test storage... 00:20:23.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.640 08:11:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:23.640 08:11:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:23.640 08:11:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:23.640 08:11:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:23.640 08:11:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:23.640 08:11:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:23.640 08:11:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:23.640 08:11:34 -- scripts/common.sh@335 -- # IFS=.-: 00:20:23.640 08:11:34 -- scripts/common.sh@335 -- # read -ra ver1 00:20:23.640 08:11:34 -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.640 08:11:34 -- scripts/common.sh@336 -- # read -ra ver2 00:20:23.640 08:11:34 -- scripts/common.sh@337 -- # local 'op=<' 00:20:23.640 08:11:34 -- scripts/common.sh@339 -- # ver1_l=2 00:20:23.640 08:11:34 -- scripts/common.sh@340 -- # ver2_l=1 00:20:23.640 08:11:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:23.640 08:11:34 -- scripts/common.sh@343 -- # case "$op" in 00:20:23.640 08:11:34 -- scripts/common.sh@344 -- # : 1 00:20:23.640 08:11:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:23.640 08:11:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:23.640 08:11:34 -- scripts/common.sh@364 -- # decimal 1 00:20:23.640 08:11:34 -- scripts/common.sh@352 -- # local d=1 00:20:23.640 08:11:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.640 08:11:34 -- scripts/common.sh@354 -- # echo 1 00:20:23.640 08:11:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:23.640 08:11:34 -- scripts/common.sh@365 -- # decimal 2 00:20:23.640 08:11:34 -- scripts/common.sh@352 -- # local d=2 00:20:23.640 08:11:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.640 08:11:34 -- scripts/common.sh@354 -- # echo 2 00:20:23.640 08:11:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:23.640 08:11:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:23.640 08:11:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:23.640 08:11:34 -- scripts/common.sh@367 -- # return 0 00:20:23.640 08:11:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.640 08:11:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.640 --rc genhtml_branch_coverage=1 00:20:23.640 --rc genhtml_function_coverage=1 00:20:23.640 --rc genhtml_legend=1 00:20:23.640 --rc geninfo_all_blocks=1 00:20:23.640 --rc geninfo_unexecuted_blocks=1 00:20:23.640 00:20:23.640 ' 00:20:23.640 08:11:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.640 --rc genhtml_branch_coverage=1 00:20:23.640 --rc genhtml_function_coverage=1 00:20:23.640 --rc genhtml_legend=1 00:20:23.640 --rc geninfo_all_blocks=1 00:20:23.640 --rc geninfo_unexecuted_blocks=1 00:20:23.640 00:20:23.640 ' 00:20:23.640 08:11:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.640 --rc genhtml_branch_coverage=1 00:20:23.640 --rc genhtml_function_coverage=1 00:20:23.640 --rc genhtml_legend=1 00:20:23.640 --rc geninfo_all_blocks=1 00:20:23.640 --rc geninfo_unexecuted_blocks=1 00:20:23.640 00:20:23.640 ' 00:20:23.640 08:11:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:23.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.640 --rc genhtml_branch_coverage=1 00:20:23.640 --rc genhtml_function_coverage=1 00:20:23.640 --rc genhtml_legend=1 00:20:23.640 --rc geninfo_all_blocks=1 00:20:23.640 --rc geninfo_unexecuted_blocks=1 00:20:23.640 00:20:23.640 ' 00:20:23.640 08:11:34 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.640 08:11:34 -- nvmf/common.sh@7 -- # uname -s 00:20:23.640 08:11:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.640 08:11:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.640 08:11:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.640 08:11:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.640 08:11:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.640 08:11:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.640 08:11:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.640 08:11:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.640 08:11:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.640 08:11:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.640 08:11:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:23.640 
08:11:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:23.640 08:11:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.640 08:11:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.640 08:11:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.640 08:11:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.640 08:11:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.640 08:11:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.640 08:11:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.640 08:11:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.640 08:11:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.640 08:11:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.640 08:11:34 -- paths/export.sh@5 -- # export PATH 00:20:23.640 08:11:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.640 08:11:34 -- nvmf/common.sh@46 -- # : 0 00:20:23.640 08:11:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:23.640 08:11:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:23.640 08:11:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:23.640 08:11:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.640 08:11:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.640 08:11:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
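nvmf/common.sh above generates a host NQN with nvme gen-hostnqn, reuses its UUID as NVME_HOSTID, and keeps both in the NVME_HOST argument array alongside NVME_CONNECT='nvme connect'. This particular run never calls the kernel initiator (the identify test drives the target with SPDK's own tool), but as a sketch, the same pieces would feed a manual connect roughly like the lines below, with the subsystem NQN and listener address taken from the target that gets configured later in this log:

  HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}                # the UUID part doubles as the host ID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"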
00:20:23.640 08:11:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:23.640 08:11:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:23.640 08:11:34 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:23.640 08:11:34 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:23.640 08:11:34 -- host/identify.sh@14 -- # nvmftestinit 00:20:23.641 08:11:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:23.641 08:11:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.641 08:11:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:23.641 08:11:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:23.641 08:11:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:23.641 08:11:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.641 08:11:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.641 08:11:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.641 08:11:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:23.641 08:11:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:23.641 08:11:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:23.641 08:11:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:23.641 08:11:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:23.641 08:11:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:23.641 08:11:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.641 08:11:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.641 08:11:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.641 08:11:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:23.641 08:11:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.641 08:11:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.641 08:11:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.641 08:11:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.641 08:11:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.641 08:11:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.641 08:11:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.641 08:11:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.641 08:11:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:23.641 08:11:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:23.641 Cannot find device "nvmf_tgt_br" 00:20:23.641 08:11:34 -- nvmf/common.sh@154 -- # true 00:20:23.641 08:11:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.641 Cannot find device "nvmf_tgt_br2" 00:20:23.641 08:11:34 -- nvmf/common.sh@155 -- # true 00:20:23.641 08:11:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:23.641 08:11:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:23.641 Cannot find device "nvmf_tgt_br" 00:20:23.641 08:11:34 -- nvmf/common.sh@157 -- # true 00:20:23.641 08:11:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:23.641 Cannot find device "nvmf_tgt_br2" 00:20:23.641 08:11:34 -- nvmf/common.sh@158 -- # true 00:20:23.641 08:11:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:23.900 08:11:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:23.900 08:11:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:23.900 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:23.900 08:11:34 -- nvmf/common.sh@161 -- # true 00:20:23.900 08:11:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:23.900 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:23.900 08:11:34 -- nvmf/common.sh@162 -- # true 00:20:23.900 08:11:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:23.900 08:11:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:23.900 08:11:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:23.900 08:11:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:23.900 08:11:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:23.900 08:11:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:23.900 08:11:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:23.900 08:11:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:23.900 08:11:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:23.900 08:11:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:23.900 08:11:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:23.900 08:11:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:23.900 08:11:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:23.900 08:11:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:23.900 08:11:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:23.900 08:11:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:23.900 08:11:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:23.900 08:11:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:23.900 08:11:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:23.900 08:11:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:23.900 08:11:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:23.900 08:11:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:23.900 08:11:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:23.900 08:11:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:23.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:20:23.900 00:20:23.900 --- 10.0.0.2 ping statistics --- 00:20:23.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.900 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:23.900 08:11:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:23.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:23.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:23.900 00:20:23.900 --- 10.0.0.3 ping statistics --- 00:20:23.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.900 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:23.900 08:11:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:23.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:23.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:23.900 00:20:23.900 --- 10.0.0.1 ping statistics --- 00:20:23.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.900 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:23.900 08:11:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.900 08:11:35 -- nvmf/common.sh@421 -- # return 0 00:20:23.900 08:11:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:23.900 08:11:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.900 08:11:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:23.900 08:11:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:23.900 08:11:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.900 08:11:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:23.900 08:11:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:24.162 08:11:35 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:24.162 08:11:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.162 08:11:35 -- common/autotest_common.sh@10 -- # set +x 00:20:24.162 08:11:35 -- host/identify.sh@19 -- # nvmfpid=93574 00:20:24.162 08:11:35 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.162 08:11:35 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.162 08:11:35 -- host/identify.sh@23 -- # waitforlisten 93574 00:20:24.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.162 08:11:35 -- common/autotest_common.sh@829 -- # '[' -z 93574 ']' 00:20:24.162 08:11:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.162 08:11:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.162 08:11:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.162 08:11:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.162 08:11:35 -- common/autotest_common.sh@10 -- # set +x 00:20:24.162 [2024-12-07 08:11:35.229945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:24.162 [2024-12-07 08:11:35.230057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.162 [2024-12-07 08:11:35.364615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.423 [2024-12-07 08:11:35.437820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:24.423 [2024-12-07 08:11:35.437989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.423 [2024-12-07 08:11:35.438002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.423 [2024-12-07 08:11:35.438011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
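The nvmf_veth_init sequence above builds the test topology from nothing: a network namespace for the target, veth pairs for the initiator and target sides, addresses under NVMF_IP_PREFIX, a bridge joining the peer ends, an iptables rule admitting TCP port 4420, and ping checks in both directions. Condensed into one sketch with the same names as the trace (the second target interface, nvmf_tgt_if2 with 10.0.0.3, is added the same way and is omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace come from the cleanup pass that runs before this setup; they are expected when the devices do not exist yet.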
00:20:24.423 [2024-12-07 08:11:35.441245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.423 [2024-12-07 08:11:35.441417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.423 [2024-12-07 08:11:35.441547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.423 [2024-12-07 08:11:35.441552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.358 08:11:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.358 08:11:36 -- common/autotest_common.sh@862 -- # return 0 00:20:25.358 08:11:36 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.358 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.358 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.358 [2024-12-07 08:11:36.287869] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.358 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.358 08:11:36 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:25.358 08:11:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.358 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.358 08:11:36 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:25.358 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.359 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.359 Malloc0 00:20:25.359 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.359 08:11:36 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.359 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.359 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.359 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.359 08:11:36 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:25.359 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.359 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.359 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.359 08:11:36 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.359 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.359 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.359 [2024-12-07 08:11:36.395013] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.359 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.359 08:11:36 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:25.359 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.359 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.359 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.359 08:11:36 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:25.359 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.359 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.359 [2024-12-07 08:11:36.410782] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:25.359 [ 
00:20:25.359 { 00:20:25.359 "allow_any_host": true, 00:20:25.359 "hosts": [], 00:20:25.359 "listen_addresses": [ 00:20:25.359 { 00:20:25.359 "adrfam": "IPv4", 00:20:25.359 "traddr": "10.0.0.2", 00:20:25.359 "transport": "TCP", 00:20:25.359 "trsvcid": "4420", 00:20:25.359 "trtype": "TCP" 00:20:25.359 } 00:20:25.359 ], 00:20:25.359 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:25.359 "subtype": "Discovery" 00:20:25.359 }, 00:20:25.359 { 00:20:25.359 "allow_any_host": true, 00:20:25.359 "hosts": [], 00:20:25.359 "listen_addresses": [ 00:20:25.359 { 00:20:25.359 "adrfam": "IPv4", 00:20:25.359 "traddr": "10.0.0.2", 00:20:25.359 "transport": "TCP", 00:20:25.359 "trsvcid": "4420", 00:20:25.359 "trtype": "TCP" 00:20:25.359 } 00:20:25.359 ], 00:20:25.359 "max_cntlid": 65519, 00:20:25.359 "max_namespaces": 32, 00:20:25.359 "min_cntlid": 1, 00:20:25.359 "model_number": "SPDK bdev Controller", 00:20:25.359 "namespaces": [ 00:20:25.359 { 00:20:25.359 "bdev_name": "Malloc0", 00:20:25.359 "eui64": "ABCDEF0123456789", 00:20:25.359 "name": "Malloc0", 00:20:25.359 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:25.359 "nsid": 1, 00:20:25.359 "uuid": "fa2c36e9-4509-4e79-a540-fc7a706b008d" 00:20:25.359 } 00:20:25.359 ], 00:20:25.359 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.359 "serial_number": "SPDK00000000000001", 00:20:25.359 "subtype": "NVMe" 00:20:25.359 } 00:20:25.359 ] 00:20:25.359 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.359 08:11:36 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:25.359 [2024-12-07 08:11:36.445060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
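The rpc_cmd calls above are thin wrappers that forward their arguments to SPDK's scripts/rpc.py over /var/tmp/spdk.sock. Reproducing by hand the target configuration that identify.sh just performed would look roughly like the following sketch (same arguments as the trace; the nvmf_tgt binary and repository paths are the ones shown in this log):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # wait for /var/tmp/spdk.sock first
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems          # emits the JSON dumped above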
00:20:25.359 [2024-12-07 08:11:36.445119] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93627 ] 00:20:25.359 [2024-12-07 08:11:36.577642] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:25.359 [2024-12-07 08:11:36.577718] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:25.359 [2024-12-07 08:11:36.577725] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:25.359 [2024-12-07 08:11:36.577737] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:25.359 [2024-12-07 08:11:36.577746] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:25.359 [2024-12-07 08:11:36.577873] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:25.359 [2024-12-07 08:11:36.577953] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x920510 0 00:20:25.359 [2024-12-07 08:11:36.582244] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:25.359 [2024-12-07 08:11:36.582266] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:25.359 [2024-12-07 08:11:36.582289] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:25.359 [2024-12-07 08:11:36.582292] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:25.359 [2024-12-07 08:11:36.582339] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.582346] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.582350] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.359 [2024-12-07 08:11:36.582363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:25.359 [2024-12-07 08:11:36.582394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.359 [2024-12-07 08:11:36.590249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.359 [2024-12-07 08:11:36.590269] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.359 [2024-12-07 08:11:36.590289] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590295] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.359 [2024-12-07 08:11:36.590308] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:25.359 [2024-12-07 08:11:36.590315] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:25.359 [2024-12-07 08:11:36.590321] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:25.359 [2024-12-07 08:11:36.590336] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590342] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590346] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.359 [2024-12-07 08:11:36.590355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.359 [2024-12-07 08:11:36.590383] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.359 [2024-12-07 08:11:36.590457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.359 [2024-12-07 08:11:36.590464] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.359 [2024-12-07 08:11:36.590468] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590472] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.359 [2024-12-07 08:11:36.590477] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:25.359 [2024-12-07 08:11:36.590485] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:25.359 [2024-12-07 08:11:36.590492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590496] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590500] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.359 [2024-12-07 08:11:36.590508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.359 [2024-12-07 08:11:36.590559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.359 [2024-12-07 08:11:36.590616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.359 [2024-12-07 08:11:36.590623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.359 [2024-12-07 08:11:36.590627] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590631] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.359 [2024-12-07 08:11:36.590637] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:25.359 [2024-12-07 08:11:36.590646] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:25.359 [2024-12-07 08:11:36.590653] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590657] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590661] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.359 [2024-12-07 08:11:36.590669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.359 [2024-12-07 08:11:36.590687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.359 [2024-12-07 08:11:36.590744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.359 [2024-12-07 08:11:36.590751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:25.359 [2024-12-07 08:11:36.590754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.359 [2024-12-07 08:11:36.590765] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:25.359 [2024-12-07 08:11:36.590775] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590780] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.359 [2024-12-07 08:11:36.590784] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.590791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.360 [2024-12-07 08:11:36.590809] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.360 [2024-12-07 08:11:36.590870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.360 [2024-12-07 08:11:36.590877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.360 [2024-12-07 08:11:36.590881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.590885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.360 [2024-12-07 08:11:36.590890] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:25.360 [2024-12-07 08:11:36.590896] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:25.360 [2024-12-07 08:11:36.590904] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:25.360 [2024-12-07 08:11:36.591009] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:25.360 [2024-12-07 08:11:36.591014] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:25.360 [2024-12-07 08:11:36.591023] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591031] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.360 [2024-12-07 08:11:36.591058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.360 [2024-12-07 08:11:36.591117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.360 [2024-12-07 08:11:36.591124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.360 [2024-12-07 08:11:36.591127] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591132] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.360 [2024-12-07 08:11:36.591137] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:25.360 [2024-12-07 08:11:36.591147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591152] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.360 [2024-12-07 08:11:36.591181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.360 [2024-12-07 08:11:36.591237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.360 [2024-12-07 08:11:36.591256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.360 [2024-12-07 08:11:36.591261] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591266] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.360 [2024-12-07 08:11:36.591271] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:25.360 [2024-12-07 08:11:36.591277] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:25.360 [2024-12-07 08:11:36.591286] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:25.360 [2024-12-07 08:11:36.591302] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:25.360 [2024-12-07 08:11:36.591312] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.360 [2024-12-07 08:11:36.591351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.360 [2024-12-07 08:11:36.591450] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.360 [2024-12-07 08:11:36.591458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.360 [2024-12-07 08:11:36.591462] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591466] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x920510): datao=0, datal=4096, cccid=0 00:20:25.360 [2024-12-07 08:11:36.591471] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96c8a0) on tqpair(0x920510): expected_datao=0, payload_size=4096 00:20:25.360 [2024-12-07 08:11:36.591480] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591485] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.360 [2024-12-07 08:11:36.591500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.360 [2024-12-07 08:11:36.591504] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591508] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.360 [2024-12-07 08:11:36.591517] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:25.360 [2024-12-07 08:11:36.591522] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:25.360 [2024-12-07 08:11:36.591527] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:25.360 [2024-12-07 08:11:36.591532] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:25.360 [2024-12-07 08:11:36.591537] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:25.360 [2024-12-07 08:11:36.591542] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:25.360 [2024-12-07 08:11:36.591556] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:25.360 [2024-12-07 08:11:36.591564] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591568] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591572] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.360 [2024-12-07 08:11:36.591602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.360 [2024-12-07 08:11:36.591669] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.360 [2024-12-07 08:11:36.591677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.360 [2024-12-07 08:11:36.591680] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591684] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96c8a0) on tqpair=0x920510 00:20:25.360 [2024-12-07 08:11:36.591693] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591697] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591701] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.360 [2024-12-07 08:11:36.591714] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591718] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591722] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.360 [2024-12-07 08:11:36.591735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.360 [2024-12-07 08:11:36.591755] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.360 [2024-12-07 08:11:36.591774] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:25.360 [2024-12-07 08:11:36.591788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:25.360 [2024-12-07 08:11:36.591795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591800] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.360 [2024-12-07 08:11:36.591804] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x920510) 00:20:25.360 [2024-12-07 08:11:36.591811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.360 [2024-12-07 08:11:36.591832] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96c8a0, cid 0, qid 0 00:20:25.360 [2024-12-07 08:11:36.591840] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ca00, cid 1, qid 0 00:20:25.360 [2024-12-07 08:11:36.591845] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96cb60, cid 2, qid 0 00:20:25.360 [2024-12-07 08:11:36.591849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.360 [2024-12-07 08:11:36.591854] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ce20, cid 4, qid 0 00:20:25.360 [2024-12-07 08:11:36.591952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.360 [2024-12-07 08:11:36.591959] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.360 [2024-12-07 08:11:36.591963] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.591967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ce20) on tqpair=0x920510 00:20:25.361 
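The debug output in this stretch is the identify tool bringing the discovery controller up over the fabric: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN toggled and CSTS.RDY polled until the controller reports ready, then IDENTIFY, keep-alive and AER configuration, and GET LOG PAGE reads against the discovery log (log page 0x70). As a rough cross-check outside this harness (not part of this run), the kernel initiator would walk the same discovery service with:

  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420    # reads the same discovery log page from the target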
[2024-12-07 08:11:36.591973] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:25.361 [2024-12-07 08:11:36.591979] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:25.361 [2024-12-07 08:11:36.591990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.591995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.591999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x920510) 00:20:25.361 [2024-12-07 08:11:36.592006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.361 [2024-12-07 08:11:36.592024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ce20, cid 4, qid 0 00:20:25.361 [2024-12-07 08:11:36.592096] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.361 [2024-12-07 08:11:36.592103] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.361 [2024-12-07 08:11:36.592107] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592111] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x920510): datao=0, datal=4096, cccid=4 00:20:25.361 [2024-12-07 08:11:36.592116] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96ce20) on tqpair(0x920510): expected_datao=0, payload_size=4096 00:20:25.361 [2024-12-07 08:11:36.592124] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592128] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.361 [2024-12-07 08:11:36.592143] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.361 [2024-12-07 08:11:36.592147] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ce20) on tqpair=0x920510 00:20:25.361 [2024-12-07 08:11:36.592164] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:25.361 [2024-12-07 08:11:36.592207] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592216] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592220] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x920510) 00:20:25.361 [2024-12-07 08:11:36.592228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.361 [2024-12-07 08:11:36.592236] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592240] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x920510) 00:20:25.361 [2024-12-07 08:11:36.592250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:25.361 [2024-12-07 08:11:36.592278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ce20, cid 4, qid 0 00:20:25.361 [2024-12-07 08:11:36.592285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96cf80, cid 5, qid 0 00:20:25.361 [2024-12-07 08:11:36.592391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.361 [2024-12-07 08:11:36.592398] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.361 [2024-12-07 08:11:36.592402] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592406] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x920510): datao=0, datal=1024, cccid=4 00:20:25.361 [2024-12-07 08:11:36.592411] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96ce20) on tqpair(0x920510): expected_datao=0, payload_size=1024 00:20:25.361 [2024-12-07 08:11:36.592420] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592424] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.361 [2024-12-07 08:11:36.592436] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.361 [2024-12-07 08:11:36.592439] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.361 [2024-12-07 08:11:36.592444] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96cf80) on tqpair=0x920510 00:20:25.626 [2024-12-07 08:11:36.633315] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.626 [2024-12-07 08:11:36.633337] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.626 [2024-12-07 08:11:36.633342] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.626 [2024-12-07 08:11:36.633363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ce20) on tqpair=0x920510 00:20:25.626 [2024-12-07 08:11:36.633377] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.626 [2024-12-07 08:11:36.633381] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.626 [2024-12-07 08:11:36.633385] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x920510) 00:20:25.627 [2024-12-07 08:11:36.633394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.627 [2024-12-07 08:11:36.633426] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ce20, cid 4, qid 0 00:20:25.627 [2024-12-07 08:11:36.633512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.627 [2024-12-07 08:11:36.633519] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.627 [2024-12-07 08:11:36.633523] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.633527] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x920510): datao=0, datal=3072, cccid=4 00:20:25.627 [2024-12-07 08:11:36.633532] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96ce20) on tqpair(0x920510): expected_datao=0, payload_size=3072 00:20:25.627 [2024-12-07 08:11:36.633540] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.627 [2024-12-07 
08:11:36.633544] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.633567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.627 [2024-12-07 08:11:36.633573] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.627 [2024-12-07 08:11:36.633576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.633580] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ce20) on tqpair=0x920510 00:20:25.627 [2024-12-07 08:11:36.633589] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.633619] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.633639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x920510) 00:20:25.627 [2024-12-07 08:11:36.633647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.627 [2024-12-07 08:11:36.633674] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ce20, cid 4, qid 0 00:20:25.627 [2024-12-07 08:11:36.633754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.627 [2024-12-07 08:11:36.633761] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.627 [2024-12-07 08:11:36.633765] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.633769] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x920510): datao=0, datal=8, cccid=4 00:20:25.627 [2024-12-07 08:11:36.633773] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x96ce20) on tqpair(0x920510): expected_datao=0, payload_size=8 00:20:25.627 [2024-12-07 08:11:36.633781] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.633785] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.627 ===================================================== 00:20:25.627 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:25.627 ===================================================== 00:20:25.627 Controller Capabilities/Features 00:20:25.627 ================================ 00:20:25.627 Vendor ID: 0000 00:20:25.627 Subsystem Vendor ID: 0000 00:20:25.627 Serial Number: .................... 00:20:25.627 Model Number: ........................................ 
00:20:25.627 Firmware Version: 24.01.1 00:20:25.627 Recommended Arb Burst: 0 00:20:25.627 IEEE OUI Identifier: 00 00 00 00:20:25.627 Multi-path I/O 00:20:25.627 May have multiple subsystem ports: No 00:20:25.627 May have multiple controllers: No 00:20:25.627 Associated with SR-IOV VF: No 00:20:25.627 Max Data Transfer Size: 131072 00:20:25.627 Max Number of Namespaces: 0 00:20:25.627 Max Number of I/O Queues: 1024 00:20:25.627 NVMe Specification Version (VS): 1.3 00:20:25.627 NVMe Specification Version (Identify): 1.3 00:20:25.627 Maximum Queue Entries: 128 00:20:25.627 Contiguous Queues Required: Yes 00:20:25.627 Arbitration Mechanisms Supported 00:20:25.627 Weighted Round Robin: Not Supported 00:20:25.627 Vendor Specific: Not Supported 00:20:25.627 Reset Timeout: 15000 ms 00:20:25.627 Doorbell Stride: 4 bytes 00:20:25.627 NVM Subsystem Reset: Not Supported 00:20:25.627 Command Sets Supported 00:20:25.627 NVM Command Set: Supported 00:20:25.627 Boot Partition: Not Supported 00:20:25.627 Memory Page Size Minimum: 4096 bytes 00:20:25.627 Memory Page Size Maximum: 4096 bytes 00:20:25.627 Persistent Memory Region: Not Supported 00:20:25.627 Optional Asynchronous Events Supported 00:20:25.627 Namespace Attribute Notices: Not Supported 00:20:25.627 Firmware Activation Notices: Not Supported 00:20:25.627 ANA Change Notices: Not Supported 00:20:25.627 PLE Aggregate Log Change Notices: Not Supported 00:20:25.627 LBA Status Info Alert Notices: Not Supported 00:20:25.627 EGE Aggregate Log Change Notices: Not Supported 00:20:25.627 Normal NVM Subsystem Shutdown event: Not Supported 00:20:25.627 Zone Descriptor Change Notices: Not Supported 00:20:25.627 Discovery Log Change Notices: Supported 00:20:25.627 Controller Attributes 00:20:25.627 128-bit Host Identifier: Not Supported 00:20:25.627 Non-Operational Permissive Mode: Not Supported 00:20:25.627 NVM Sets: Not Supported 00:20:25.627 Read Recovery Levels: Not Supported 00:20:25.627 Endurance Groups: Not Supported 00:20:25.627 Predictable Latency Mode: Not Supported 00:20:25.627 Traffic Based Keep ALive: Not Supported 00:20:25.627 Namespace Granularity: Not Supported 00:20:25.627 SQ Associations: Not Supported 00:20:25.627 UUID List: Not Supported 00:20:25.627 Multi-Domain Subsystem: Not Supported 00:20:25.627 Fixed Capacity Management: Not Supported 00:20:25.627 Variable Capacity Management: Not Supported 00:20:25.627 Delete Endurance Group: Not Supported 00:20:25.627 Delete NVM Set: Not Supported 00:20:25.627 Extended LBA Formats Supported: Not Supported 00:20:25.627 Flexible Data Placement Supported: Not Supported 00:20:25.627 00:20:25.627 Controller Memory Buffer Support 00:20:25.627 ================================ 00:20:25.627 Supported: No 00:20:25.627 00:20:25.627 Persistent Memory Region Support 00:20:25.627 ================================ 00:20:25.627 Supported: No 00:20:25.627 00:20:25.627 Admin Command Set Attributes 00:20:25.627 ============================ 00:20:25.627 Security Send/Receive: Not Supported 00:20:25.627 Format NVM: Not Supported 00:20:25.627 Firmware Activate/Download: Not Supported 00:20:25.627 Namespace Management: Not Supported 00:20:25.627 Device Self-Test: Not Supported 00:20:25.627 Directives: Not Supported 00:20:25.627 NVMe-MI: Not Supported 00:20:25.627 Virtualization Management: Not Supported 00:20:25.627 Doorbell Buffer Config: Not Supported 00:20:25.627 Get LBA Status Capability: Not Supported 00:20:25.627 Command & Feature Lockdown Capability: Not Supported 00:20:25.627 Abort Command Limit: 1 00:20:25.627 
Async Event Request Limit: 4 00:20:25.627 Number of Firmware Slots: N/A 00:20:25.627 Firmware Slot 1 Read-Only: N/A 00:20:25.627 Firmware Activation Without Reset: N/A 00:20:25.627 [2024-12-07 08:11:36.678270] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.627 [2024-12-07 08:11:36.678291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.627 [2024-12-07 08:11:36.678312] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.627 [2024-12-07 08:11:36.678316] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ce20) on tqpair=0x920510 00:20:25.627 Multiple Update Detection Support: N/A 00:20:25.627 Firmware Update Granularity: No Information Provided 00:20:25.627 Per-Namespace SMART Log: No 00:20:25.627 Asymmetric Namespace Access Log Page: Not Supported 00:20:25.627 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:25.627 Command Effects Log Page: Not Supported 00:20:25.627 Get Log Page Extended Data: Supported 00:20:25.627 Telemetry Log Pages: Not Supported 00:20:25.627 Persistent Event Log Pages: Not Supported 00:20:25.627 Supported Log Pages Log Page: May Support 00:20:25.627 Commands Supported & Effects Log Page: Not Supported 00:20:25.627 Feature Identifiers & Effects Log Page:May Support 00:20:25.627 NVMe-MI Commands & Effects Log Page: May Support 00:20:25.627 Data Area 4 for Telemetry Log: Not Supported 00:20:25.627 Error Log Page Entries Supported: 128 00:20:25.627 Keep Alive: Not Supported 00:20:25.627 00:20:25.627 NVM Command Set Attributes 00:20:25.627 ========================== 00:20:25.627 Submission Queue Entry Size 00:20:25.627 Max: 1 00:20:25.627 Min: 1 00:20:25.627 Completion Queue Entry Size 00:20:25.627 Max: 1 00:20:25.627 Min: 1 00:20:25.627 Number of Namespaces: 0 00:20:25.627 Compare Command: Not Supported 00:20:25.627 Write Uncorrectable Command: Not Supported 00:20:25.627 Dataset Management Command: Not Supported 00:20:25.627 Write Zeroes Command: Not Supported 00:20:25.627 Set Features Save Field: Not Supported 00:20:25.627 Reservations: Not Supported 00:20:25.627 Timestamp: Not Supported 00:20:25.627 Copy: Not Supported 00:20:25.627 Volatile Write Cache: Not Present 00:20:25.627 Atomic Write Unit (Normal): 1 00:20:25.627 Atomic Write Unit (PFail): 1 00:20:25.627 Atomic Compare & Write Unit: 1 00:20:25.627 Fused Compare & Write: Supported 00:20:25.627 Scatter-Gather List 00:20:25.627 SGL Command Set: Supported 00:20:25.627 SGL Keyed: Supported 00:20:25.627 SGL Bit Bucket Descriptor: Not Supported 00:20:25.627 SGL Metadata Pointer: Not Supported 00:20:25.627 Oversized SGL: Not Supported 00:20:25.627 SGL Metadata Address: Not Supported 00:20:25.627 SGL Offset: Supported 00:20:25.628 Transport SGL Data Block: Not Supported 00:20:25.628 Replay Protected Memory Block: Not Supported 00:20:25.628 00:20:25.628 Firmware Slot Information 00:20:25.628 ========================= 00:20:25.628 Active slot: 0 00:20:25.628 00:20:25.628 00:20:25.628 Error Log 00:20:25.628 ========= 00:20:25.628 00:20:25.628 Active Namespaces 00:20:25.628 ================= 00:20:25.628 Discovery Log Page 00:20:25.628 ================== 00:20:25.628 Generation Counter: 2 00:20:25.628 Number of Records: 2 00:20:25.628 Record Format: 0 00:20:25.628 00:20:25.628 Discovery Log Entry 0 00:20:25.628 ---------------------- 00:20:25.628 Transport Type: 3 (TCP) 00:20:25.628 Address Family: 1 (IPv4) 00:20:25.628 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:25.628 Entry Flags: 00:20:25.628 Duplicate
Returned Information: 1 00:20:25.628 Explicit Persistent Connection Support for Discovery: 1 00:20:25.628 Transport Requirements: 00:20:25.628 Secure Channel: Not Required 00:20:25.628 Port ID: 0 (0x0000) 00:20:25.628 Controller ID: 65535 (0xffff) 00:20:25.628 Admin Max SQ Size: 128 00:20:25.628 Transport Service Identifier: 4420 00:20:25.628 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:25.628 Transport Address: 10.0.0.2 00:20:25.628 Discovery Log Entry 1 00:20:25.628 ---------------------- 00:20:25.628 Transport Type: 3 (TCP) 00:20:25.628 Address Family: 1 (IPv4) 00:20:25.628 Subsystem Type: 2 (NVM Subsystem) 00:20:25.628 Entry Flags: 00:20:25.628 Duplicate Returned Information: 0 00:20:25.628 Explicit Persistent Connection Support for Discovery: 0 00:20:25.628 Transport Requirements: 00:20:25.628 Secure Channel: Not Required 00:20:25.628 Port ID: 0 (0x0000) 00:20:25.628 Controller ID: 65535 (0xffff) 00:20:25.628 Admin Max SQ Size: 128 00:20:25.628 Transport Service Identifier: 4420 00:20:25.628 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:25.628 Transport Address: 10.0.0.2 [2024-12-07 08:11:36.678412] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:25.628 [2024-12-07 08:11:36.678429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.628 [2024-12-07 08:11:36.678437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.628 [2024-12-07 08:11:36.678443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.628 [2024-12-07 08:11:36.678449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.628 [2024-12-07 08:11:36.678458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678463] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.628 [2024-12-07 08:11:36.678475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 08:11:36.678500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.678577] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.628 [2024-12-07 08:11:36.678584] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.628 [2024-12-07 08:11:36.678588] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678592] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.628 [2024-12-07 08:11:36.678601] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.628 [2024-12-07 08:11:36.678616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 08:11:36.678655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.678728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.628 [2024-12-07 08:11:36.678735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.628 [2024-12-07 08:11:36.678739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.628 [2024-12-07 08:11:36.678748] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:25.628 [2024-12-07 08:11:36.678754] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:25.628 [2024-12-07 08:11:36.678764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.628 [2024-12-07 08:11:36.678780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 08:11:36.678798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.678858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.628 [2024-12-07 08:11:36.678865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.628 [2024-12-07 08:11:36.678869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678873] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.628 [2024-12-07 08:11:36.678885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.628 [2024-12-07 08:11:36.678901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 08:11:36.678918] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.678975] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.628 [2024-12-07 08:11:36.678982] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.628 [2024-12-07 08:11:36.678985] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.678990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.628 [2024-12-07 08:11:36.679000] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679004] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 
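The discovery log page printed above advertises two TCP/IPv4 entries at 10.0.0.2:4420: the discovery subsystem itself and the data subsystem nqn.2016-06.io.spdk:cnode1. As a minimal sketch of how those entries would normally be consumed from an initiator host, the standard nvme-cli tool (not used by this test run, assumed to be installed) could discover and then connect with:

  nvme discover -t tcp -a 10.0.0.2 -s 4420                                # list both discovery log entries
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # attach the data subsystem

The harness instead drives the same target directly with SPDK's spdk_nvme_identify binary, as the next step shows.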
00:20:25.628 [2024-12-07 08:11:36.679016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 08:11:36.679033] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.679090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.628 [2024-12-07 08:11:36.679096] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.628 [2024-12-07 08:11:36.679100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.628 [2024-12-07 08:11:36.679115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.628 [2024-12-07 08:11:36.679130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 08:11:36.679148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.679207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.628 [2024-12-07 08:11:36.679214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.628 [2024-12-07 08:11:36.679218] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679222] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.628 [2024-12-07 08:11:36.679232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679237] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679254] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.628 [2024-12-07 08:11:36.679264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 08:11:36.679285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.679346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.628 [2024-12-07 08:11:36.679353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.628 [2024-12-07 08:11:36.679356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679361] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.628 [2024-12-07 08:11:36.679372] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.628 [2024-12-07 08:11:36.679380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.628 [2024-12-07 08:11:36.679388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.628 [2024-12-07 
08:11:36.679406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.628 [2024-12-07 08:11:36.679460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.679467] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.679470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679475] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.679485] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679490] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.679501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.679518] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.679575] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.679581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.679585] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679589] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.679600] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679604] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.679616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.679633] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.679690] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.679697] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.679700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.679715] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679719] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679723] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.679730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.679748] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.679807] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:20:25.629 [2024-12-07 08:11:36.679814] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.679817] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679821] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.679832] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679836] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679840] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.679848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.679865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.679919] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.679925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.679929] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679933] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.679944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679948] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.679952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.679960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.679977] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.680034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.680041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.680044] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680048] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.680059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680064] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.680075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.680093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.680149] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.680155] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.680159] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680163] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.680174] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680178] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680182] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.680190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.680228] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.680288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.680295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.680299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680303] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.680314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680319] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680323] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.680330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.680348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.680406] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.680413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.680416] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680420] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.680431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680435] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.680446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.680464] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.680523] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.680530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.680533] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680538] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.680548] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680553] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.680564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.680581] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.680635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.680642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.680646] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680650] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.680660] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680668] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.680676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.680693] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.629 [2024-12-07 08:11:36.680749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.629 [2024-12-07 08:11:36.680756] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.629 [2024-12-07 08:11:36.680760] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.629 [2024-12-07 08:11:36.680774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.629 [2024-12-07 08:11:36.680782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.629 [2024-12-07 08:11:36.680790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.629 [2024-12-07 08:11:36.680807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.680860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.680867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.680871] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.680875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.680885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.680889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 
08:11:36.680893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.680901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.680918] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.680971] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.680978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.680981] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.680986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.680996] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681004] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.681082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681093] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681097] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681107] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681112] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681116] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681140] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.681213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681222] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681242] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681246] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681250] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.681333] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681343] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681347] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681358] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681366] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681391] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.681448] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681454] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681458] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681462] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681473] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681477] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681481] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681505] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.681562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681569] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681573] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681577] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681587] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681591] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681595] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681630] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, 
qid 0 00:20:25.630 [2024-12-07 08:11:36.681686] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681700] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681711] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681716] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681720] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.681801] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681811] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681831] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681834] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.681915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.681922] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.681926] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681930] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.681941] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681945] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.681949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.681957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.681974] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.682034] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.682040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.682044] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.682048] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.682059] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.682063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.682068] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.682075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.630 [2024-12-07 08:11:36.682093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.630 [2024-12-07 08:11:36.682149] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.630 [2024-12-07 08:11:36.682156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.630 [2024-12-07 08:11:36.682159] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.682164] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.630 [2024-12-07 08:11:36.682174] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.682179] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.630 [2024-12-07 08:11:36.682182] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x920510) 00:20:25.630 [2024-12-07 08:11:36.682190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.631 [2024-12-07 08:11:36.685269] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x96ccc0, cid 3, qid 0 00:20:25.631 [2024-12-07 08:11:36.685331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.631 [2024-12-07 08:11:36.685339] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.631 [2024-12-07 08:11:36.685343] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.685348] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x96ccc0) on tqpair=0x920510 00:20:25.631 [2024-12-07 08:11:36.685357] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:25.631 00:20:25.631 08:11:36 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:25.631 [2024-12-07 08:11:36.723628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
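The host/identify.sh step above starts spdk_nvme_identify with a transport ID: the -r argument packs trtype, adrfam, traddr, trsvcid and subnqn into a single quoted key:value list, and -L all enables every SPDK debug log flag, which is why the identify output in this log is interleaved with *DEBUG* lines. A sketch of the corresponding invocation against the discovery subsystem (same binary and target as in this run; the subnqn here is the standard discovery NQN) would be:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all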
00:20:25.631 [2024-12-07 08:11:36.723686] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93630 ] 00:20:25.631 [2024-12-07 08:11:36.859793] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:25.631 [2024-12-07 08:11:36.859868] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:25.631 [2024-12-07 08:11:36.859875] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:25.631 [2024-12-07 08:11:36.859887] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:25.631 [2024-12-07 08:11:36.859896] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:25.631 [2024-12-07 08:11:36.860014] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:25.631 [2024-12-07 08:11:36.860065] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x148b510 0 00:20:25.631 [2024-12-07 08:11:36.864278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:25.631 [2024-12-07 08:11:36.864302] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:25.631 [2024-12-07 08:11:36.864308] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:25.631 [2024-12-07 08:11:36.864312] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:25.631 [2024-12-07 08:11:36.864358] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.864365] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.864369] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.631 [2024-12-07 08:11:36.864381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:25.631 [2024-12-07 08:11:36.864413] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.631 [2024-12-07 08:11:36.872241] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.631 [2024-12-07 08:11:36.872260] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.631 [2024-12-07 08:11:36.872265] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872270] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.631 [2024-12-07 08:11:36.872286] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:25.631 [2024-12-07 08:11:36.872294] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:25.631 [2024-12-07 08:11:36.872300] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:25.631 [2024-12-07 08:11:36.872315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872325] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.631 [2024-12-07 08:11:36.872334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.631 [2024-12-07 08:11:36.872363] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.631 [2024-12-07 08:11:36.872435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.631 [2024-12-07 08:11:36.872443] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.631 [2024-12-07 08:11:36.872447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.631 [2024-12-07 08:11:36.872458] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:25.631 [2024-12-07 08:11:36.872466] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:25.631 [2024-12-07 08:11:36.872474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872479] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872483] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.631 [2024-12-07 08:11:36.872490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.631 [2024-12-07 08:11:36.872510] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.631 [2024-12-07 08:11:36.872598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.631 [2024-12-07 08:11:36.872605] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.631 [2024-12-07 08:11:36.872609] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.631 [2024-12-07 08:11:36.872620] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:25.631 [2024-12-07 08:11:36.872628] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:25.631 [2024-12-07 08:11:36.872635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872643] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.631 [2024-12-07 08:11:36.872650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.631 [2024-12-07 08:11:36.872668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.631 [2024-12-07 08:11:36.872719] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.631 [2024-12-07 08:11:36.872727] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.631 [2024-12-07 
08:11:36.872731] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872735] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.631 [2024-12-07 08:11:36.872742] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:25.631 [2024-12-07 08:11:36.872752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.631 [2024-12-07 08:11:36.872767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.631 [2024-12-07 08:11:36.872785] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.631 [2024-12-07 08:11:36.872843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.631 [2024-12-07 08:11:36.872850] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.631 [2024-12-07 08:11:36.872854] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872858] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.631 [2024-12-07 08:11:36.872863] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:25.631 [2024-12-07 08:11:36.872869] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:25.631 [2024-12-07 08:11:36.872877] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:25.631 [2024-12-07 08:11:36.872982] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:25.631 [2024-12-07 08:11:36.872986] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:25.631 [2024-12-07 08:11:36.872995] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.872999] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.631 [2024-12-07 08:11:36.873003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.632 [2024-12-07 08:11:36.873027] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.632 [2024-12-07 08:11:36.873085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.632 [2024-12-07 08:11:36.873092] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.632 [2024-12-07 08:11:36.873096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.632 
[2024-12-07 08:11:36.873106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:25.632 [2024-12-07 08:11:36.873115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.632 [2024-12-07 08:11:36.873148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.632 [2024-12-07 08:11:36.873201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.632 [2024-12-07 08:11:36.873224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.632 [2024-12-07 08:11:36.873228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873232] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.632 [2024-12-07 08:11:36.873252] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:25.632 [2024-12-07 08:11:36.873260] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.873270] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:25.632 [2024-12-07 08:11:36.873285] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.873296] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.632 [2024-12-07 08:11:36.873335] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.632 [2024-12-07 08:11:36.873443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.632 [2024-12-07 08:11:36.873451] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.632 [2024-12-07 08:11:36.873456] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873460] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=4096, cccid=0 00:20:25.632 [2024-12-07 08:11:36.873465] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d78a0) on tqpair(0x148b510): expected_datao=0, payload_size=4096 00:20:25.632 [2024-12-07 08:11:36.873474] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873479] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.632 [2024-12-07 08:11:36.873494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.632 [2024-12-07 08:11:36.873498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.632 [2024-12-07 08:11:36.873511] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:25.632 [2024-12-07 08:11:36.873517] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:25.632 [2024-12-07 08:11:36.873522] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:25.632 [2024-12-07 08:11:36.873527] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:25.632 [2024-12-07 08:11:36.873532] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:25.632 [2024-12-07 08:11:36.873537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.873551] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.873560] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873569] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.632 [2024-12-07 08:11:36.873619] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.632 [2024-12-07 08:11:36.873716] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.632 [2024-12-07 08:11:36.873724] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.632 [2024-12-07 08:11:36.873728] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873732] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d78a0) on tqpair=0x148b510 00:20:25.632 [2024-12-07 08:11:36.873741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873746] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.632 [2024-12-07 08:11:36.873767] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873771] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873775] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.632 [2024-12-07 08:11:36.873788] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873796] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.632 [2024-12-07 08:11:36.873808] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873812] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873816] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.632 [2024-12-07 08:11:36.873827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.873841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.873849] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873853] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.873857] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.873864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.632 [2024-12-07 08:11:36.873886] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d78a0, cid 0, qid 0 00:20:25.632 [2024-12-07 08:11:36.873894] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7a00, cid 1, qid 0 00:20:25.632 [2024-12-07 08:11:36.873899] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7b60, cid 2, qid 0 00:20:25.632 [2024-12-07 08:11:36.873905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.632 [2024-12-07 08:11:36.873910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7e20, cid 4, qid 0 00:20:25.632 [2024-12-07 08:11:36.874025] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.632 [2024-12-07 08:11:36.874033] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.632 [2024-12-07 08:11:36.874037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.874041] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7e20) on tqpair=0x148b510 00:20:25.632 [2024-12-07 08:11:36.874047] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:25.632 [2024-12-07 08:11:36.874053] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.874061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.874072] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.874080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.874084] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.874088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x148b510) 00:20:25.632 [2024-12-07 08:11:36.874096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:25.632 [2024-12-07 08:11:36.874115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7e20, cid 4, qid 0 00:20:25.632 [2024-12-07 08:11:36.874180] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.632 [2024-12-07 08:11:36.874187] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.632 [2024-12-07 08:11:36.874191] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.632 [2024-12-07 08:11:36.874195] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7e20) on tqpair=0x148b510 00:20:25.632 [2024-12-07 08:11:36.874286] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.874305] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:25.632 [2024-12-07 08:11:36.874314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874318] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874322] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.874330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.633 [2024-12-07 08:11:36.874351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7e20, cid 4, qid 0 00:20:25.633 [2024-12-07 08:11:36.874430] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.633 [2024-12-07 08:11:36.874437] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.633 [2024-12-07 08:11:36.874441] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874445] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=4096, cccid=4 00:20:25.633 [2024-12-07 08:11:36.874450] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d7e20) on tqpair(0x148b510): expected_datao=0, payload_size=4096 00:20:25.633 [2024-12-07 08:11:36.874458] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874462] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:20:25.633 [2024-12-07 08:11:36.874471] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.633 [2024-12-07 08:11:36.874477] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.633 [2024-12-07 08:11:36.874481] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874485] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7e20) on tqpair=0x148b510 00:20:25.633 [2024-12-07 08:11:36.874501] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:25.633 [2024-12-07 08:11:36.874511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874522] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874530] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874538] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.874545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.633 [2024-12-07 08:11:36.874566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7e20, cid 4, qid 0 00:20:25.633 [2024-12-07 08:11:36.874649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.633 [2024-12-07 08:11:36.874656] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.633 [2024-12-07 08:11:36.874660] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874664] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=4096, cccid=4 00:20:25.633 [2024-12-07 08:11:36.874669] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d7e20) on tqpair(0x148b510): expected_datao=0, payload_size=4096 00:20:25.633 [2024-12-07 08:11:36.874677] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874681] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.633 [2024-12-07 08:11:36.874696] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.633 [2024-12-07 08:11:36.874700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7e20) on tqpair=0x148b510 00:20:25.633 [2024-12-07 08:11:36.874720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874740] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874744] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.874755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.633 [2024-12-07 08:11:36.874777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7e20, cid 4, qid 0 00:20:25.633 [2024-12-07 08:11:36.874857] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.633 [2024-12-07 08:11:36.874871] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.633 [2024-12-07 08:11:36.874875] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874880] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=4096, cccid=4 00:20:25.633 [2024-12-07 08:11:36.874885] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d7e20) on tqpair(0x148b510): expected_datao=0, payload_size=4096 00:20:25.633 [2024-12-07 08:11:36.874893] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874897] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874906] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.633 [2024-12-07 08:11:36.874913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.633 [2024-12-07 08:11:36.874916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.874921] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7e20) on tqpair=0x148b510 00:20:25.633 [2024-12-07 08:11:36.874930] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874939] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874950] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874962] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874967] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874973] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:25.633 [2024-12-07 08:11:36.874978] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:25.633 [2024-12-07 08:11:36.874983] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:25.633 [2024-12-07 08:11:36.875006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875011] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875016] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.875023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.633 [2024-12-07 08:11:36.875030] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875034] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.875044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:25.633 [2024-12-07 08:11:36.875071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7e20, cid 4, qid 0 00:20:25.633 [2024-12-07 08:11:36.875079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7f80, cid 5, qid 0 00:20:25.633 [2024-12-07 08:11:36.875160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.633 [2024-12-07 08:11:36.875167] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.633 [2024-12-07 08:11:36.875171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875175] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7e20) on tqpair=0x148b510 00:20:25.633 [2024-12-07 08:11:36.875183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.633 [2024-12-07 08:11:36.875189] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.633 [2024-12-07 08:11:36.875193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875212] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7f80) on tqpair=0x148b510 00:20:25.633 [2024-12-07 08:11:36.875226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875231] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.875242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.633 [2024-12-07 08:11:36.875263] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7f80, cid 5, qid 0 00:20:25.633 [2024-12-07 08:11:36.875327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.633 [2024-12-07 08:11:36.875335] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.633 [2024-12-07 08:11:36.875339] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875343] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7f80) on tqpair=0x148b510 00:20:25.633 [2024-12-07 08:11:36.875354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.875370] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.633 [2024-12-07 08:11:36.875387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7f80, cid 5, qid 0 00:20:25.633 [2024-12-07 08:11:36.875445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.633 [2024-12-07 08:11:36.875452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.633 [2024-12-07 08:11:36.875456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875460] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7f80) on tqpair=0x148b510 00:20:25.633 [2024-12-07 08:11:36.875472] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.633 [2024-12-07 08:11:36.875480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x148b510) 00:20:25.633 [2024-12-07 08:11:36.875487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.634 [2024-12-07 08:11:36.875504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7f80, cid 5, qid 0 00:20:25.634 [2024-12-07 08:11:36.875563] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.634 [2024-12-07 08:11:36.875576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.634 [2024-12-07 08:11:36.875580] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7f80) on tqpair=0x148b510 00:20:25.634 [2024-12-07 08:11:36.875600] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875605] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x148b510) 00:20:25.634 [2024-12-07 08:11:36.875616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.634 [2024-12-07 08:11:36.875623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875627] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875631] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x148b510) 00:20:25.634 [2024-12-07 08:11:36.875638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.634 [2024-12-07 08:11:36.875645] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x148b510) 00:20:25.634 [2024-12-07 08:11:36.875659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:25.634 [2024-12-07 08:11:36.875667] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875671] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875675] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x148b510) 00:20:25.634 [2024-12-07 08:11:36.875681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.634 [2024-12-07 08:11:36.875701] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7f80, cid 5, qid 0 00:20:25.634 [2024-12-07 08:11:36.875708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7e20, cid 4, qid 0 00:20:25.634 [2024-12-07 08:11:36.875713] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d80e0, cid 6, qid 0 00:20:25.634 [2024-12-07 08:11:36.875718] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d8240, cid 7, qid 0 00:20:25.634 [2024-12-07 08:11:36.875862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.634 [2024-12-07 08:11:36.875869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.634 [2024-12-07 08:11:36.875874] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875878] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=8192, cccid=5 00:20:25.634 [2024-12-07 08:11:36.875883] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d7f80) on tqpair(0x148b510): expected_datao=0, payload_size=8192 00:20:25.634 [2024-12-07 08:11:36.875900] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875906] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875912] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.634 [2024-12-07 08:11:36.875918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.634 [2024-12-07 08:11:36.875921] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875925] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=512, cccid=4 00:20:25.634 [2024-12-07 08:11:36.875930] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d7e20) on tqpair(0x148b510): expected_datao=0, payload_size=512 00:20:25.634 [2024-12-07 08:11:36.875937] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875941] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875947] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.634 [2024-12-07 08:11:36.875952] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.634 [2024-12-07 08:11:36.875956] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875960] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=512, cccid=6 00:20:25.634 [2024-12-07 08:11:36.875964] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d80e0) on tqpair(0x148b510): expected_datao=0, payload_size=512 00:20:25.634 [2024-12-07 08:11:36.875971] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875975] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875981] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:25.634 [2024-12-07 08:11:36.875987] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:25.634 [2024-12-07 08:11:36.875991] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.875995] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x148b510): datao=0, datal=4096, cccid=7 00:20:25.634 [2024-12-07 08:11:36.875999] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14d8240) on tqpair(0x148b510): expected_datao=0, payload_size=4096 00:20:25.634 [2024-12-07 08:11:36.876007] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.876011] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.876019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.634 [2024-12-07 08:11:36.876025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.634 [2024-12-07 08:11:36.876029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.876033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7f80) on tqpair=0x148b510 00:20:25.634 [2024-12-07 08:11:36.876051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.634 [2024-12-07 08:11:36.876058] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.634 [2024-12-07 08:11:36.876061] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.634 [2024-12-07 08:11:36.876065] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7e20) on tqpair=0x148b510 00:20:25.634 [2024-12-07 08:11:36.876077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.634 ===================================================== 00:20:25.634 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.634 ===================================================== 00:20:25.634 Controller Capabilities/Features 00:20:25.634 ================================ 00:20:25.634 Vendor ID: 8086 00:20:25.634 Subsystem Vendor ID: 8086 00:20:25.634 Serial Number: SPDK00000000000001 00:20:25.634 Model Number: SPDK bdev Controller 00:20:25.634 Firmware Version: 24.01.1 00:20:25.634 Recommended Arb Burst: 6 00:20:25.634 IEEE OUI Identifier: e4 d2 5c 00:20:25.634 Multi-path I/O 00:20:25.634 May have multiple subsystem ports: Yes 00:20:25.634 May have multiple controllers: Yes 00:20:25.634 Associated with SR-IOV VF: No 00:20:25.634 Max Data Transfer Size: 131072 00:20:25.634 Max Number of Namespaces: 32 00:20:25.634 Max Number of I/O Queues: 127 00:20:25.634 NVMe Specification Version (VS): 1.3 00:20:25.634 NVMe Specification Version (Identify): 1.3 00:20:25.634 Maximum Queue Entries: 128 00:20:25.634 Contiguous Queues Required: Yes 00:20:25.634 Arbitration Mechanisms Supported 00:20:25.634 Weighted Round Robin: Not Supported 00:20:25.634 Vendor Specific: Not Supported 00:20:25.634 Reset Timeout: 15000 ms 00:20:25.634 Doorbell Stride: 4 bytes 00:20:25.634 NVM Subsystem Reset: Not Supported 00:20:25.634 Command Sets Supported 00:20:25.634 NVM Command Set: Supported 00:20:25.634 Boot Partition: Not Supported 00:20:25.634 Memory 
Page Size Minimum: 4096 bytes 00:20:25.634 Memory Page Size Maximum: 4096 bytes 00:20:25.634 Persistent Memory Region: Not Supported 00:20:25.634 Optional Asynchronous Events Supported 00:20:25.634 Namespace Attribute Notices: Supported 00:20:25.634 Firmware Activation Notices: Not Supported 00:20:25.634 ANA Change Notices: Not Supported 00:20:25.634 PLE Aggregate Log Change Notices: Not Supported 00:20:25.634 LBA Status Info Alert Notices: Not Supported 00:20:25.634 EGE Aggregate Log Change Notices: Not Supported 00:20:25.634 Normal NVM Subsystem Shutdown event: Not Supported 00:20:25.634 Zone Descriptor Change Notices: Not Supported 00:20:25.634 Discovery Log Change Notices: Not Supported 00:20:25.634 Controller Attributes 00:20:25.634 128-bit Host Identifier: Supported 00:20:25.634 Non-Operational Permissive Mode: Not Supported 00:20:25.634 NVM Sets: Not Supported 00:20:25.634 Read Recovery Levels: Not Supported 00:20:25.634 Endurance Groups: Not Supported 00:20:25.634 Predictable Latency Mode: Not Supported 00:20:25.634 Traffic Based Keep ALive: Not Supported 00:20:25.634 Namespace Granularity: Not Supported 00:20:25.634 SQ Associations: Not Supported 00:20:25.634 UUID List: Not Supported 00:20:25.634 Multi-Domain Subsystem: Not Supported 00:20:25.634 Fixed Capacity Management: Not Supported 00:20:25.634 Variable Capacity Management: Not Supported 00:20:25.634 Delete Endurance Group: Not Supported 00:20:25.634 Delete NVM Set: Not Supported 00:20:25.634 Extended LBA Formats Supported: Not Supported 00:20:25.634 Flexible Data Placement Supported: Not Supported 00:20:25.634 00:20:25.634 Controller Memory Buffer Support 00:20:25.634 ================================ 00:20:25.634 Supported: No 00:20:25.634 00:20:25.634 Persistent Memory Region Support 00:20:25.634 ================================ 00:20:25.634 Supported: No 00:20:25.634 00:20:25.634 Admin Command Set Attributes 00:20:25.634 ============================ 00:20:25.634 Security Send/Receive: Not Supported 00:20:25.634 Format NVM: Not Supported 00:20:25.634 Firmware Activate/Download: Not Supported 00:20:25.634 Namespace Management: Not Supported 00:20:25.634 Device Self-Test: Not Supported 00:20:25.634 Directives: Not Supported 00:20:25.635 NVMe-MI: Not Supported 00:20:25.635 Virtualization Management: Not Supported 00:20:25.635 Doorbell Buffer Config: Not Supported 00:20:25.635 Get LBA Status Capability: Not Supported 00:20:25.635 Command & Feature Lockdown Capability: Not Supported 00:20:25.635 Abort Command Limit: 4 00:20:25.635 Async Event Request Limit: 4 00:20:25.635 Number of Firmware Slots: N/A 00:20:25.635 Firmware Slot 1 Read-Only: N/A 00:20:25.635 Firmware Activation Without Reset: [2024-12-07 08:11:36.876083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.635 [2024-12-07 08:11:36.876087] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.876091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d80e0) on tqpair=0x148b510 00:20:25.635 [2024-12-07 08:11:36.876100] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.635 [2024-12-07 08:11:36.876106] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.635 [2024-12-07 08:11:36.876109] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.876113] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d8240) on tqpair=0x148b510 00:20:25.635 N/A 00:20:25.635 Multiple 
Update Detection Support: N/A 00:20:25.635 Firmware Update Granularity: No Information Provided 00:20:25.635 Per-Namespace SMART Log: No 00:20:25.635 Asymmetric Namespace Access Log Page: Not Supported 00:20:25.635 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:25.635 Command Effects Log Page: Supported 00:20:25.635 Get Log Page Extended Data: Supported 00:20:25.635 Telemetry Log Pages: Not Supported 00:20:25.635 Persistent Event Log Pages: Not Supported 00:20:25.635 Supported Log Pages Log Page: May Support 00:20:25.635 Commands Supported & Effects Log Page: Not Supported 00:20:25.635 Feature Identifiers & Effects Log Page:May Support 00:20:25.635 NVMe-MI Commands & Effects Log Page: May Support 00:20:25.635 Data Area 4 for Telemetry Log: Not Supported 00:20:25.635 Error Log Page Entries Supported: 128 00:20:25.635 Keep Alive: Supported 00:20:25.635 Keep Alive Granularity: 10000 ms 00:20:25.635 00:20:25.635 NVM Command Set Attributes 00:20:25.635 ========================== 00:20:25.635 Submission Queue Entry Size 00:20:25.635 Max: 64 00:20:25.635 Min: 64 00:20:25.635 Completion Queue Entry Size 00:20:25.635 Max: 16 00:20:25.635 Min: 16 00:20:25.635 Number of Namespaces: 32 00:20:25.635 Compare Command: Supported 00:20:25.635 Write Uncorrectable Command: Not Supported 00:20:25.635 Dataset Management Command: Supported 00:20:25.635 Write Zeroes Command: Supported 00:20:25.635 Set Features Save Field: Not Supported 00:20:25.635 Reservations: Supported 00:20:25.635 Timestamp: Not Supported 00:20:25.635 Copy: Supported 00:20:25.635 Volatile Write Cache: Present 00:20:25.635 Atomic Write Unit (Normal): 1 00:20:25.635 Atomic Write Unit (PFail): 1 00:20:25.635 Atomic Compare & Write Unit: 1 00:20:25.635 Fused Compare & Write: Supported 00:20:25.635 Scatter-Gather List 00:20:25.635 SGL Command Set: Supported 00:20:25.635 SGL Keyed: Supported 00:20:25.635 SGL Bit Bucket Descriptor: Not Supported 00:20:25.635 SGL Metadata Pointer: Not Supported 00:20:25.635 Oversized SGL: Not Supported 00:20:25.635 SGL Metadata Address: Not Supported 00:20:25.635 SGL Offset: Supported 00:20:25.635 Transport SGL Data Block: Not Supported 00:20:25.635 Replay Protected Memory Block: Not Supported 00:20:25.635 00:20:25.635 Firmware Slot Information 00:20:25.635 ========================= 00:20:25.635 Active slot: 1 00:20:25.635 Slot 1 Firmware Revision: 24.01.1 00:20:25.635 00:20:25.635 00:20:25.635 Commands Supported and Effects 00:20:25.635 ============================== 00:20:25.635 Admin Commands 00:20:25.635 -------------- 00:20:25.635 Get Log Page (02h): Supported 00:20:25.635 Identify (06h): Supported 00:20:25.635 Abort (08h): Supported 00:20:25.635 Set Features (09h): Supported 00:20:25.635 Get Features (0Ah): Supported 00:20:25.635 Asynchronous Event Request (0Ch): Supported 00:20:25.635 Keep Alive (18h): Supported 00:20:25.635 I/O Commands 00:20:25.635 ------------ 00:20:25.635 Flush (00h): Supported LBA-Change 00:20:25.635 Write (01h): Supported LBA-Change 00:20:25.635 Read (02h): Supported 00:20:25.635 Compare (05h): Supported 00:20:25.635 Write Zeroes (08h): Supported LBA-Change 00:20:25.635 Dataset Management (09h): Supported LBA-Change 00:20:25.635 Copy (19h): Supported LBA-Change 00:20:25.635 Unknown (79h): Supported LBA-Change 00:20:25.635 Unknown (7Ah): Supported 00:20:25.635 00:20:25.635 Error Log 00:20:25.635 ========= 00:20:25.635 00:20:25.635 Arbitration 00:20:25.635 =========== 00:20:25.635 Arbitration Burst: 1 00:20:25.635 00:20:25.635 Power Management 00:20:25.635 ================ 00:20:25.635 
Number of Power States: 1 00:20:25.635 Current Power State: Power State #0 00:20:25.635 Power State #0: 00:20:25.635 Max Power: 0.00 W 00:20:25.635 Non-Operational State: Operational 00:20:25.635 Entry Latency: Not Reported 00:20:25.635 Exit Latency: Not Reported 00:20:25.635 Relative Read Throughput: 0 00:20:25.635 Relative Read Latency: 0 00:20:25.635 Relative Write Throughput: 0 00:20:25.635 Relative Write Latency: 0 00:20:25.635 Idle Power: Not Reported 00:20:25.635 Active Power: Not Reported 00:20:25.635 Non-Operational Permissive Mode: Not Supported 00:20:25.635 00:20:25.635 Health Information 00:20:25.635 ================== 00:20:25.635 Critical Warnings: 00:20:25.635 Available Spare Space: OK 00:20:25.635 Temperature: OK 00:20:25.635 Device Reliability: OK 00:20:25.635 Read Only: No 00:20:25.635 Volatile Memory Backup: OK 00:20:25.635 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:25.635 Temperature Threshold: [2024-12-07 08:11:36.879290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x148b510) 00:20:25.635 [2024-12-07 08:11:36.879314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.635 [2024-12-07 08:11:36.879343] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d8240, cid 7, qid 0 00:20:25.635 [2024-12-07 08:11:36.879415] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.635 [2024-12-07 08:11:36.879423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.635 [2024-12-07 08:11:36.879427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d8240) on tqpair=0x148b510 00:20:25.635 [2024-12-07 08:11:36.879469] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:25.635 [2024-12-07 08:11:36.879483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.635 [2024-12-07 08:11:36.879490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.635 [2024-12-07 08:11:36.879496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.635 [2024-12-07 08:11:36.879503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:25.635 [2024-12-07 08:11:36.879512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.635 [2024-12-07 08:11:36.879528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.635 [2024-12-07 08:11:36.879551] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 
00:20:25.635 [2024-12-07 08:11:36.879613] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.635 [2024-12-07 08:11:36.879620] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.635 [2024-12-07 08:11:36.879624] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879629] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.635 [2024-12-07 08:11:36.879638] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.635 [2024-12-07 08:11:36.879653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.635 [2024-12-07 08:11:36.879675] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.635 [2024-12-07 08:11:36.879751] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.635 [2024-12-07 08:11:36.879758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.635 [2024-12-07 08:11:36.879762] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879766] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.635 [2024-12-07 08:11:36.879772] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:25.635 [2024-12-07 08:11:36.879777] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:25.635 [2024-12-07 08:11:36.879787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.635 [2024-12-07 08:11:36.879792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.879796] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.879803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.879821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.879883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.879890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.879894] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.879898] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.879909] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.879914] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.879918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.879925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 
08:11:36.879943] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.879997] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880004] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880008] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880012] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880023] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880032] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880124] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880128] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880139] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880144] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880148] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880172] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880242] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880259] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880271] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880276] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880280] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880307] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880367] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880378] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880382] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880398] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880402] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880490] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880514] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880518] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880611] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880627] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880709] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880719] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880838] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880852] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880868] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.880879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.880897] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.880950] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.880966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.636 [2024-12-07 08:11:36.880971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880976] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.636 [2024-12-07 08:11:36.880988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.636 [2024-12-07 08:11:36.880997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.636 [2024-12-07 08:11:36.881004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.636 [2024-12-07 08:11:36.881023] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.636 [2024-12-07 08:11:36.881079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.636 [2024-12-07 08:11:36.881093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.637 [2024-12-07 08:11:36.881098] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.637 [2024-12-07 08:11:36.881102] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on 
tqpair=0x148b510 00:20:25.638 [2024-12-07 08:11:36.883069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.638 [2024-12-07 08:11:36.883080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.638
[2024-12-07 08:11:36.883085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.638 [2024-12-07 08:11:36.883089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.638 [2024-12-07 08:11:36.883101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.638 [2024-12-07 08:11:36.883105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.638 [2024-12-07 08:11:36.883109] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.638 [2024-12-07 08:11:36.883117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.638 [2024-12-07 08:11:36.883135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.638 [2024-12-07 08:11:36.883192] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.638 [2024-12-07 08:11:36.886262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.638 [2024-12-07 08:11:36.886269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.638 [2024-12-07 08:11:36.886273] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.638 [2024-12-07 08:11:36.886289] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:25.638 [2024-12-07 08:11:36.886294] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:25.638 [2024-12-07 08:11:36.886298] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x148b510) 00:20:25.638 [2024-12-07 08:11:36.886306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:25.638 [2024-12-07 08:11:36.886331] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14d7cc0, cid 3, qid 0 00:20:25.638 [2024-12-07 08:11:36.886395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:25.638 [2024-12-07 08:11:36.886402] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:25.638 [2024-12-07 08:11:36.886406] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:25.638 [2024-12-07 08:11:36.886410] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14d7cc0) on tqpair=0x148b510 00:20:25.638 [2024-12-07 08:11:36.886419] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:20:25.897 0 Kelvin (-273 Celsius) 00:20:25.897 Available Spare: 0% 00:20:25.897 Available Spare Threshold: 0% 00:20:25.897 Life Percentage Used: 0% 00:20:25.897 Data Units Read: 0 00:20:25.897 Data Units Written: 0 00:20:25.897 Host Read Commands: 0 00:20:25.897 Host Write Commands: 0 00:20:25.897 Controller Busy Time: 0 minutes 00:20:25.897 Power Cycles: 0 00:20:25.897 Power On Hours: 0 hours 00:20:25.897 Unsafe Shutdowns: 0 00:20:25.897 Unrecoverable Media Errors: 0 00:20:25.897 Lifetime Error Log Entries: 0 00:20:25.897 Warning Temperature Time: 0 minutes 00:20:25.897 Critical Temperature Time: 0 minutes 00:20:25.897 00:20:25.897 Number of Queues 00:20:25.897 ================ 00:20:25.897 Number of I/O Submission Queues: 127 00:20:25.897 Number of I/O Completion Queues: 127 00:20:25.897 00:20:25.897 Active Namespaces 00:20:25.897 ================= 00:20:25.897 Namespace ID:1 00:20:25.897 
Error Recovery Timeout: Unlimited 00:20:25.897 Command Set Identifier: NVM (00h) 00:20:25.897 Deallocate: Supported 00:20:25.897 Deallocated/Unwritten Error: Not Supported 00:20:25.897 Deallocated Read Value: Unknown 00:20:25.897 Deallocate in Write Zeroes: Not Supported 00:20:25.897 Deallocated Guard Field: 0xFFFF 00:20:25.897 Flush: Supported 00:20:25.897 Reservation: Supported 00:20:25.897 Namespace Sharing Capabilities: Multiple Controllers 00:20:25.897 Size (in LBAs): 131072 (0GiB) 00:20:25.897 Capacity (in LBAs): 131072 (0GiB) 00:20:25.897 Utilization (in LBAs): 131072 (0GiB) 00:20:25.897 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:25.897 EUI64: ABCDEF0123456789 00:20:25.897 UUID: fa2c36e9-4509-4e79-a540-fc7a706b008d 00:20:25.897 Thin Provisioning: Not Supported 00:20:25.897 Per-NS Atomic Units: Yes 00:20:25.897 Atomic Boundary Size (Normal): 0 00:20:25.897 Atomic Boundary Size (PFail): 0 00:20:25.897 Atomic Boundary Offset: 0 00:20:25.897 Maximum Single Source Range Length: 65535 00:20:25.897 Maximum Copy Length: 65535 00:20:25.897 Maximum Source Range Count: 1 00:20:25.897 NGUID/EUI64 Never Reused: No 00:20:25.897 Namespace Write Protected: No 00:20:25.897 Number of LBA Formats: 1 00:20:25.897 Current LBA Format: LBA Format #00 00:20:25.897 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:25.897 00:20:25.897 08:11:36 -- host/identify.sh@51 -- # sync 00:20:25.897 08:11:36 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.897 08:11:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.897 08:11:36 -- common/autotest_common.sh@10 -- # set +x 00:20:25.897 08:11:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.897 08:11:36 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:25.897 08:11:36 -- host/identify.sh@56 -- # nvmftestfini 00:20:25.897 08:11:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:25.897 08:11:36 -- nvmf/common.sh@116 -- # sync 00:20:25.897 08:11:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:25.897 08:11:36 -- nvmf/common.sh@119 -- # set +e 00:20:25.897 08:11:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:25.897 08:11:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:25.897 rmmod nvme_tcp 00:20:25.897 rmmod nvme_fabrics 00:20:25.897 rmmod nvme_keyring 00:20:25.897 08:11:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:25.897 08:11:37 -- nvmf/common.sh@123 -- # set -e 00:20:25.897 08:11:37 -- nvmf/common.sh@124 -- # return 0 00:20:25.897 08:11:37 -- nvmf/common.sh@477 -- # '[' -n 93574 ']' 00:20:25.897 08:11:37 -- nvmf/common.sh@478 -- # killprocess 93574 00:20:25.897 08:11:37 -- common/autotest_common.sh@936 -- # '[' -z 93574 ']' 00:20:25.897 08:11:37 -- common/autotest_common.sh@940 -- # kill -0 93574 00:20:25.897 08:11:37 -- common/autotest_common.sh@941 -- # uname 00:20:25.897 08:11:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:25.897 08:11:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93574 00:20:25.897 08:11:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:25.898 08:11:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:25.898 killing process with pid 93574 00:20:25.898 08:11:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93574' 00:20:25.898 08:11:37 -- common/autotest_common.sh@955 -- # kill 93574 00:20:25.898 [2024-12-07 08:11:37.040979] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 
'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:25.898 08:11:37 -- common/autotest_common.sh@960 -- # wait 93574 00:20:26.157 08:11:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:26.157 08:11:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:26.157 08:11:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:26.157 08:11:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.157 08:11:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:26.157 08:11:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.157 08:11:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.157 08:11:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.157 08:11:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:26.157 ************************************ 00:20:26.157 END TEST nvmf_identify 00:20:26.157 ************************************ 00:20:26.157 00:20:26.157 real 0m2.694s 00:20:26.157 user 0m7.680s 00:20:26.157 sys 0m0.654s 00:20:26.157 08:11:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:26.157 08:11:37 -- common/autotest_common.sh@10 -- # set +x 00:20:26.157 08:11:37 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:26.157 08:11:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:26.157 08:11:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:26.157 08:11:37 -- common/autotest_common.sh@10 -- # set +x 00:20:26.157 ************************************ 00:20:26.157 START TEST nvmf_perf 00:20:26.157 ************************************ 00:20:26.157 08:11:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:26.416 * Looking for test storage... 00:20:26.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:26.417 08:11:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:26.417 08:11:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:26.417 08:11:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:26.417 08:11:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:26.417 08:11:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:26.417 08:11:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:26.417 08:11:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:26.417 08:11:37 -- scripts/common.sh@335 -- # IFS=.-: 00:20:26.417 08:11:37 -- scripts/common.sh@335 -- # read -ra ver1 00:20:26.417 08:11:37 -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.417 08:11:37 -- scripts/common.sh@336 -- # read -ra ver2 00:20:26.417 08:11:37 -- scripts/common.sh@337 -- # local 'op=<' 00:20:26.417 08:11:37 -- scripts/common.sh@339 -- # ver1_l=2 00:20:26.417 08:11:37 -- scripts/common.sh@340 -- # ver2_l=1 00:20:26.417 08:11:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:26.417 08:11:37 -- scripts/common.sh@343 -- # case "$op" in 00:20:26.417 08:11:37 -- scripts/common.sh@344 -- # : 1 00:20:26.417 08:11:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:26.417 08:11:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:26.417 08:11:37 -- scripts/common.sh@364 -- # decimal 1 00:20:26.417 08:11:37 -- scripts/common.sh@352 -- # local d=1 00:20:26.417 08:11:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.417 08:11:37 -- scripts/common.sh@354 -- # echo 1 00:20:26.417 08:11:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:26.417 08:11:37 -- scripts/common.sh@365 -- # decimal 2 00:20:26.417 08:11:37 -- scripts/common.sh@352 -- # local d=2 00:20:26.417 08:11:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.417 08:11:37 -- scripts/common.sh@354 -- # echo 2 00:20:26.417 08:11:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:26.417 08:11:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:26.417 08:11:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:26.417 08:11:37 -- scripts/common.sh@367 -- # return 0 00:20:26.417 08:11:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.417 08:11:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:26.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.417 --rc genhtml_branch_coverage=1 00:20:26.417 --rc genhtml_function_coverage=1 00:20:26.417 --rc genhtml_legend=1 00:20:26.417 --rc geninfo_all_blocks=1 00:20:26.417 --rc geninfo_unexecuted_blocks=1 00:20:26.417 00:20:26.417 ' 00:20:26.417 08:11:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:26.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.417 --rc genhtml_branch_coverage=1 00:20:26.417 --rc genhtml_function_coverage=1 00:20:26.417 --rc genhtml_legend=1 00:20:26.417 --rc geninfo_all_blocks=1 00:20:26.417 --rc geninfo_unexecuted_blocks=1 00:20:26.417 00:20:26.417 ' 00:20:26.417 08:11:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:26.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.417 --rc genhtml_branch_coverage=1 00:20:26.417 --rc genhtml_function_coverage=1 00:20:26.417 --rc genhtml_legend=1 00:20:26.417 --rc geninfo_all_blocks=1 00:20:26.417 --rc geninfo_unexecuted_blocks=1 00:20:26.417 00:20:26.417 ' 00:20:26.417 08:11:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:26.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.417 --rc genhtml_branch_coverage=1 00:20:26.417 --rc genhtml_function_coverage=1 00:20:26.417 --rc genhtml_legend=1 00:20:26.417 --rc geninfo_all_blocks=1 00:20:26.417 --rc geninfo_unexecuted_blocks=1 00:20:26.417 00:20:26.417 ' 00:20:26.417 08:11:37 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:26.417 08:11:37 -- nvmf/common.sh@7 -- # uname -s 00:20:26.417 08:11:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.417 08:11:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.417 08:11:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.417 08:11:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.417 08:11:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.417 08:11:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.417 08:11:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.417 08:11:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.417 08:11:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.417 08:11:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.417 08:11:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:26.417 
08:11:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:20:26.417 08:11:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.417 08:11:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.417 08:11:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:26.417 08:11:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:26.417 08:11:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.417 08:11:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.417 08:11:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.417 08:11:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.417 08:11:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.417 08:11:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.417 08:11:37 -- paths/export.sh@5 -- # export PATH 00:20:26.417 08:11:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.417 08:11:37 -- nvmf/common.sh@46 -- # : 0 00:20:26.417 08:11:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:26.417 08:11:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:26.417 08:11:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:26.417 08:11:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.417 08:11:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.417 08:11:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
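
The nvmf/common.sh preamble above pins down the initiator identity: NVME_HOSTNQN comes from 'nvme gen-hostnqn' and NVME_HOSTID is the UUID portion of that NQN (eb673a70-3a3d-4301-872c-26c9ce6fa6ec in this run). Below is a minimal standalone sketch of that derivation, assuming nvme-cli is installed; the parameter expansion used to strip the prefix is illustrative, not necessarily the exact code in nvmf/common.sh.
# Sketch only (assumes nvme-cli): derive the host identity the way the trace above records it.
NVME_HOSTNQN=$(nvme gen-hostnqn)                                # e.g. nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec
NVME_HOSTID=${NVME_HOSTNQN##*:}                                 # keep only the UUID after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # flags the suite can later hand to 'nvme connect'
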
00:20:26.417 08:11:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:26.417 08:11:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:26.417 08:11:37 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:26.417 08:11:37 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:26.417 08:11:37 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:26.417 08:11:37 -- host/perf.sh@17 -- # nvmftestinit 00:20:26.417 08:11:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:26.417 08:11:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.417 08:11:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:26.417 08:11:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:26.417 08:11:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:26.417 08:11:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.417 08:11:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.417 08:11:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.417 08:11:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:26.417 08:11:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:26.417 08:11:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:26.417 08:11:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:26.417 08:11:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:26.417 08:11:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:26.417 08:11:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.417 08:11:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.417 08:11:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:26.417 08:11:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:26.417 08:11:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:26.417 08:11:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:26.417 08:11:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:26.417 08:11:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.417 08:11:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:26.417 08:11:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:26.417 08:11:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:26.417 08:11:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:26.417 08:11:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:26.417 08:11:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:26.418 Cannot find device "nvmf_tgt_br" 00:20:26.418 08:11:37 -- nvmf/common.sh@154 -- # true 00:20:26.418 08:11:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:26.418 Cannot find device "nvmf_tgt_br2" 00:20:26.418 08:11:37 -- nvmf/common.sh@155 -- # true 00:20:26.418 08:11:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:26.418 08:11:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:26.418 Cannot find device "nvmf_tgt_br" 00:20:26.418 08:11:37 -- nvmf/common.sh@157 -- # true 00:20:26.418 08:11:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:26.418 Cannot find device "nvmf_tgt_br2" 00:20:26.418 08:11:37 -- nvmf/common.sh@158 -- # true 00:20:26.418 08:11:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:26.418 08:11:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:26.677 08:11:37 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:26.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.677 08:11:37 -- nvmf/common.sh@161 -- # true 00:20:26.677 08:11:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:26.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:26.677 08:11:37 -- nvmf/common.sh@162 -- # true 00:20:26.677 08:11:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:26.677 08:11:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:26.677 08:11:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:26.677 08:11:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:26.677 08:11:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.677 08:11:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.677 08:11:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.677 08:11:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:26.677 08:11:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:26.677 08:11:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:26.677 08:11:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:26.677 08:11:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:26.677 08:11:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:26.677 08:11:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.677 08:11:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.677 08:11:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.677 08:11:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:26.677 08:11:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:26.677 08:11:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.677 08:11:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.678 08:11:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.678 08:11:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.678 08:11:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.678 08:11:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:26.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:26.678 00:20:26.678 --- 10.0.0.2 ping statistics --- 00:20:26.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.678 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:26.678 08:11:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:26.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:26.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:20:26.678 00:20:26.678 --- 10.0.0.3 ping statistics --- 00:20:26.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.678 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:26.678 08:11:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:26.678 00:20:26.678 --- 10.0.0.1 ping statistics --- 00:20:26.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.678 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:26.678 08:11:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.678 08:11:37 -- nvmf/common.sh@421 -- # return 0 00:20:26.678 08:11:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:26.678 08:11:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.678 08:11:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:26.678 08:11:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:26.678 08:11:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.678 08:11:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:26.678 08:11:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:26.678 08:11:37 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:26.678 08:11:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:26.678 08:11:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.678 08:11:37 -- common/autotest_common.sh@10 -- # set +x 00:20:26.678 08:11:37 -- nvmf/common.sh@469 -- # nvmfpid=93809 00:20:26.678 08:11:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.678 08:11:37 -- nvmf/common.sh@470 -- # waitforlisten 93809 00:20:26.678 08:11:37 -- common/autotest_common.sh@829 -- # '[' -z 93809 ']' 00:20:26.678 08:11:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.678 08:11:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.678 08:11:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.678 08:11:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.678 08:11:37 -- common/autotest_common.sh@10 -- # set +x 00:20:26.678 [2024-12-07 08:11:37.943093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:26.678 [2024-12-07 08:11:37.943181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.937 [2024-12-07 08:11:38.078970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.937 [2024-12-07 08:11:38.152474] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:26.937 [2024-12-07 08:11:38.152639] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.937 [2024-12-07 08:11:38.152653] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:26.937 [2024-12-07 08:11:38.152661] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.937 [2024-12-07 08:11:38.152798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.937 [2024-12-07 08:11:38.152943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.937 [2024-12-07 08:11:38.153529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.937 [2024-12-07 08:11:38.153564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.871 08:11:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.872 08:11:38 -- common/autotest_common.sh@862 -- # return 0 00:20:27.872 08:11:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:27.872 08:11:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.872 08:11:38 -- common/autotest_common.sh@10 -- # set +x 00:20:27.872 08:11:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.872 08:11:39 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:27.872 08:11:39 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:28.438 08:11:39 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:28.438 08:11:39 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:28.438 08:11:39 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:28.438 08:11:39 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:29.003 08:11:39 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:29.003 08:11:39 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:29.003 08:11:39 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:29.003 08:11:39 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:29.003 08:11:39 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:29.003 [2024-12-07 08:11:40.239975] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.003 08:11:40 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.582 08:11:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:29.582 08:11:40 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.582 08:11:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:29.582 08:11:40 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:29.877 08:11:40 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:30.156 [2024-12-07 08:11:41.193306] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.156 08:11:41 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:30.429 08:11:41 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:30.429 08:11:41 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:30.429 08:11:41 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:30.429 08:11:41 -- host/perf.sh@24 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:31.366 Initializing NVMe Controllers 00:20:31.366 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:31.366 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:31.366 Initialization complete. Launching workers. 00:20:31.366 ======================================================== 00:20:31.366 Latency(us) 00:20:31.366 Device Information : IOPS MiB/s Average min max 00:20:31.366 PCIE (0000:00:06.0) NSID 1 from core 0: 22690.00 88.63 1410.07 235.15 8049.44 00:20:31.366 ======================================================== 00:20:31.366 Total : 22690.00 88.63 1410.07 235.15 8049.44 00:20:31.366 00:20:31.366 08:11:42 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:32.745 Initializing NVMe Controllers 00:20:32.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:32.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:32.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:32.745 Initialization complete. Launching workers. 00:20:32.745 ======================================================== 00:20:32.745 Latency(us) 00:20:32.745 Device Information : IOPS MiB/s Average min max 00:20:32.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3804.84 14.86 262.53 100.68 7244.98 00:20:32.745 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.64 0.48 8153.79 5952.38 12014.18 00:20:32.745 ======================================================== 00:20:32.745 Total : 3927.48 15.34 508.94 100.68 12014.18 00:20:32.745 00:20:32.745 08:11:43 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.121 [2024-12-07 08:11:45.164227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.164742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.164848] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.164916] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.164984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.165045] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.165105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.165179] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 [2024-12-07 08:11:45.165282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 
00:20:34.121 [2024-12-07 08:11:45.165299] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6c9260 is same with the state(5) to be set 00:20:34.121 Initializing NVMe Controllers 00:20:34.121 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.121 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.121 Initialization complete. Launching workers. 00:20:34.121 ======================================================== 00:20:34.121 Latency(us) 00:20:34.121 Device Information : IOPS MiB/s Average min max 00:20:34.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9535.54 37.25 3356.36 491.20 9369.21 00:20:34.121 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2673.38 10.44 12089.82 6690.69 20172.77 00:20:34.121 ======================================================== 00:20:34.121 Total : 12208.92 47.69 5268.72 491.20 20172.77 00:20:34.121 00:20:34.121 08:11:45 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:34.121 08:11:45 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:36.652 Initializing NVMe Controllers 00:20:36.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.652 Controller IO queue size 128, less than required. 00:20:36.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.652 Controller IO queue size 128, less than required. 00:20:36.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:36.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:36.653 Initialization complete. Launching workers. 00:20:36.653 ======================================================== 00:20:36.653 Latency(us) 00:20:36.653 Device Information : IOPS MiB/s Average min max 00:20:36.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1571.82 392.96 82542.28 56886.26 170763.97 00:20:36.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.74 153.18 220340.71 81239.20 351223.64 00:20:36.653 ======================================================== 00:20:36.653 Total : 2184.56 546.14 121192.68 56886.26 351223.64 00:20:36.653 00:20:36.653 08:11:47 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:36.910 No valid NVMe controllers or AIO or URING devices found 00:20:36.910 Initializing NVMe Controllers 00:20:36.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.910 Controller IO queue size 128, less than required. 00:20:36.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.910 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:36.910 Controller IO queue size 128, less than required. 
00:20:36.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:36.910 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:36.910 WARNING: Some requested NVMe devices were skipped 00:20:36.910 08:11:48 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:39.446 Initializing NVMe Controllers 00:20:39.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.446 Controller IO queue size 128, less than required. 00:20:39.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.446 Controller IO queue size 128, less than required. 00:20:39.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:39.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:39.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:39.446 Initialization complete. Launching workers. 00:20:39.446 00:20:39.446 ==================== 00:20:39.446 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:39.446 TCP transport: 00:20:39.446 polls: 10943 00:20:39.446 idle_polls: 7767 00:20:39.446 sock_completions: 3176 00:20:39.446 nvme_completions: 4253 00:20:39.446 submitted_requests: 6518 00:20:39.446 queued_requests: 1 00:20:39.446 00:20:39.446 ==================== 00:20:39.446 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:39.446 TCP transport: 00:20:39.446 polls: 10762 00:20:39.446 idle_polls: 7513 00:20:39.446 sock_completions: 3249 00:20:39.446 nvme_completions: 6279 00:20:39.446 submitted_requests: 9643 00:20:39.446 queued_requests: 1 00:20:39.446 ======================================================== 00:20:39.446 Latency(us) 00:20:39.446 Device Information : IOPS MiB/s Average min max 00:20:39.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1126.07 281.52 116797.53 69373.85 194310.46 00:20:39.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1631.87 407.97 79474.51 44349.02 119552.47 00:20:39.446 ======================================================== 00:20:39.446 Total : 2757.94 689.49 94713.50 44349.02 194310.46 00:20:39.446 00:20:39.446 08:11:50 -- host/perf.sh@66 -- # sync 00:20:39.446 08:11:50 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.704 08:11:50 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:39.704 08:11:50 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:39.704 08:11:50 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:39.964 08:11:51 -- host/perf.sh@72 -- # ls_guid=31de35f1-a823-4413-9d82-ce7e03e4399f 00:20:39.964 08:11:51 -- host/perf.sh@73 -- # get_lvs_free_mb 31de35f1-a823-4413-9d82-ce7e03e4399f 00:20:39.964 08:11:51 -- common/autotest_common.sh@1353 -- # local lvs_uuid=31de35f1-a823-4413-9d82-ce7e03e4399f 00:20:39.964 08:11:51 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:39.964 08:11:51 -- common/autotest_common.sh@1355 -- # local fc 00:20:39.964 08:11:51 -- common/autotest_common.sh@1356 -- # local cs 
00:20:39.964 08:11:51 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:40.223 08:11:51 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:40.223 { 00:20:40.223 "base_bdev": "Nvme0n1", 00:20:40.223 "block_size": 4096, 00:20:40.223 "cluster_size": 4194304, 00:20:40.223 "free_clusters": 1278, 00:20:40.223 "name": "lvs_0", 00:20:40.223 "total_data_clusters": 1278, 00:20:40.223 "uuid": "31de35f1-a823-4413-9d82-ce7e03e4399f" 00:20:40.223 } 00:20:40.223 ]' 00:20:40.223 08:11:51 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="31de35f1-a823-4413-9d82-ce7e03e4399f") .free_clusters' 00:20:40.480 08:11:51 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:40.481 08:11:51 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="31de35f1-a823-4413-9d82-ce7e03e4399f") .cluster_size' 00:20:40.481 08:11:51 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:40.481 08:11:51 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:40.481 5112 00:20:40.481 08:11:51 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:40.481 08:11:51 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:40.481 08:11:51 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 31de35f1-a823-4413-9d82-ce7e03e4399f lbd_0 5112 00:20:40.738 08:11:51 -- host/perf.sh@80 -- # lb_guid=ea138a68-9bb3-4e70-87ea-be49c5c29c6c 00:20:40.738 08:11:51 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore ea138a68-9bb3-4e70-87ea-be49c5c29c6c lvs_n_0 00:20:40.997 08:11:52 -- host/perf.sh@83 -- # ls_nested_guid=863e1a57-b8b2-4e03-a137-39786305c09a 00:20:40.997 08:11:52 -- host/perf.sh@84 -- # get_lvs_free_mb 863e1a57-b8b2-4e03-a137-39786305c09a 00:20:40.997 08:11:52 -- common/autotest_common.sh@1353 -- # local lvs_uuid=863e1a57-b8b2-4e03-a137-39786305c09a 00:20:40.997 08:11:52 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:40.997 08:11:52 -- common/autotest_common.sh@1355 -- # local fc 00:20:40.997 08:11:52 -- common/autotest_common.sh@1356 -- # local cs 00:20:40.997 08:11:52 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:41.256 08:11:52 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:41.256 { 00:20:41.256 "base_bdev": "Nvme0n1", 00:20:41.256 "block_size": 4096, 00:20:41.256 "cluster_size": 4194304, 00:20:41.256 "free_clusters": 0, 00:20:41.256 "name": "lvs_0", 00:20:41.256 "total_data_clusters": 1278, 00:20:41.256 "uuid": "31de35f1-a823-4413-9d82-ce7e03e4399f" 00:20:41.256 }, 00:20:41.256 { 00:20:41.256 "base_bdev": "ea138a68-9bb3-4e70-87ea-be49c5c29c6c", 00:20:41.256 "block_size": 4096, 00:20:41.256 "cluster_size": 4194304, 00:20:41.256 "free_clusters": 1276, 00:20:41.256 "name": "lvs_n_0", 00:20:41.256 "total_data_clusters": 1276, 00:20:41.256 "uuid": "863e1a57-b8b2-4e03-a137-39786305c09a" 00:20:41.256 } 00:20:41.256 ]' 00:20:41.256 08:11:52 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="863e1a57-b8b2-4e03-a137-39786305c09a") .free_clusters' 00:20:41.256 08:11:52 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:41.256 08:11:52 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="863e1a57-b8b2-4e03-a137-39786305c09a") .cluster_size' 00:20:41.515 08:11:52 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:41.515 08:11:52 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:41.515 5104 00:20:41.515 08:11:52 -- common/autotest_common.sh@1363 -- # echo 5104 
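
The get_lvs_free_mb traces in this stretch convert the lvstore's free_clusters and cluster_size (read back via bdev_lvol_get_lvstores and jq) into a size in MiB that perf.sh then passes to bdev_lvol_create. Below is a minimal sketch of that arithmetic using the values from this run; the real helper lives in common/autotest_common.sh and may differ in detail.
# Sketch only: free space in MiB = free_clusters * cluster_size (bytes) / 1024 / 1024
fc=1278; cs=4194304                      # lvs_0: 1278 free clusters of 4 MiB each
echo $((fc * cs / 1024 / 1024))          # 5112 -> 'bdev_lvol_create ... lbd_0 5112'
fc=1276                                  # lvs_n_0 reports two fewer free clusters
echo $((fc * cs / 1024 / 1024))          # 5104 -> 'bdev_lvol_create ... lbd_nest_0 5104'
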
00:20:41.515 08:11:52 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:41.515 08:11:52 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 863e1a57-b8b2-4e03-a137-39786305c09a lbd_nest_0 5104 00:20:41.774 08:11:52 -- host/perf.sh@88 -- # lb_nested_guid=4ed4e9fb-5acd-4d1f-9902-a2233febc88b 00:20:41.774 08:11:52 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.033 08:11:53 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:42.033 08:11:53 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4ed4e9fb-5acd-4d1f-9902-a2233febc88b 00:20:42.033 08:11:53 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.291 08:11:53 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:42.291 08:11:53 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:42.291 08:11:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:42.291 08:11:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:42.291 08:11:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.550 No valid NVMe controllers or AIO or URING devices found 00:20:42.811 Initializing NVMe Controllers 00:20:42.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.811 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:42.811 WARNING: Some requested NVMe devices were skipped 00:20:42.811 08:11:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:42.811 08:11:53 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.011 Initializing NVMe Controllers 00:20:55.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:55.011 Initialization complete. Launching workers. 
00:20:55.011 ======================================================== 00:20:55.011 Latency(us) 00:20:55.011 Device Information : IOPS MiB/s Average min max 00:20:55.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 917.89 114.74 1088.53 356.37 8558.10 00:20:55.011 ======================================================== 00:20:55.011 Total : 917.89 114.74 1088.53 356.37 8558.10 00:20:55.011 00:20:55.011 08:12:04 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:55.011 08:12:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.011 08:12:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:55.011 No valid NVMe controllers or AIO or URING devices found 00:20:55.011 Initializing NVMe Controllers 00:20:55.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.011 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:55.011 WARNING: Some requested NVMe devices were skipped 00:20:55.011 08:12:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.011 08:12:04 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:04.978 Initializing NVMe Controllers 00:21:04.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:04.978 Initialization complete. Launching workers. 00:21:04.978 ======================================================== 00:21:04.978 Latency(us) 00:21:04.978 Device Information : IOPS MiB/s Average min max 00:21:04.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1080.87 135.11 29620.34 8082.64 250164.29 00:21:04.978 ======================================================== 00:21:04.978 Total : 1080.87 135.11 29620.34 8082.64 250164.29 00:21:04.978 00:21:04.978 08:12:14 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:04.978 08:12:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:04.978 08:12:14 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:04.978 No valid NVMe controllers or AIO or URING devices found 00:21:04.978 Initializing NVMe Controllers 00:21:04.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:04.978 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:04.978 WARNING: Some requested NVMe devices were skipped 00:21:04.978 08:12:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:04.978 08:12:14 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:14.965 Initializing NVMe Controllers 00:21:14.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.965 Controller IO queue size 128, less than required. 00:21:14.965 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
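A condensed sketch of the queue-depth/I/O-size sweep driven by host/perf.sh in the runs above, assuming the spdk_nvme_perf binary and TCP listener address shown in the trace; the 512-byte passes report "No valid NVMe controllers" because the namespace uses a 4096-byte block size:

    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        # randrw with a 50% read mix, 10 s per combination, against the TCP subsystem listener
        build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done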
00:21:14.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:14.965 Initialization complete. Launching workers. 00:21:14.965 ======================================================== 00:21:14.965 Latency(us) 00:21:14.965 Device Information : IOPS MiB/s Average min max 00:21:14.965 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4111.34 513.92 31173.57 12616.07 67193.41 00:21:14.965 ======================================================== 00:21:14.965 Total : 4111.34 513.92 31173.57 12616.07 67193.41 00:21:14.965 00:21:14.966 08:12:25 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.966 08:12:25 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4ed4e9fb-5acd-4d1f-9902-a2233febc88b 00:21:14.966 08:12:25 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:14.966 08:12:26 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ea138a68-9bb3-4e70-87ea-be49c5c29c6c 00:21:15.224 08:12:26 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:15.481 08:12:26 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:15.481 08:12:26 -- host/perf.sh@114 -- # nvmftestfini 00:21:15.481 08:12:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:15.481 08:12:26 -- nvmf/common.sh@116 -- # sync 00:21:15.481 08:12:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:15.481 08:12:26 -- nvmf/common.sh@119 -- # set +e 00:21:15.481 08:12:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:15.481 08:12:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:15.481 rmmod nvme_tcp 00:21:15.481 rmmod nvme_fabrics 00:21:15.481 rmmod nvme_keyring 00:21:15.481 08:12:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:15.481 08:12:26 -- nvmf/common.sh@123 -- # set -e 00:21:15.481 08:12:26 -- nvmf/common.sh@124 -- # return 0 00:21:15.481 08:12:26 -- nvmf/common.sh@477 -- # '[' -n 93809 ']' 00:21:15.481 08:12:26 -- nvmf/common.sh@478 -- # killprocess 93809 00:21:15.481 08:12:26 -- common/autotest_common.sh@936 -- # '[' -z 93809 ']' 00:21:15.481 08:12:26 -- common/autotest_common.sh@940 -- # kill -0 93809 00:21:15.481 08:12:26 -- common/autotest_common.sh@941 -- # uname 00:21:15.481 08:12:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:15.481 08:12:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93809 00:21:15.738 killing process with pid 93809 00:21:15.738 08:12:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:15.738 08:12:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:15.738 08:12:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93809' 00:21:15.738 08:12:26 -- common/autotest_common.sh@955 -- # kill 93809 00:21:15.738 08:12:26 -- common/autotest_common.sh@960 -- # wait 93809 00:21:16.306 08:12:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:16.306 08:12:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:16.306 08:12:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:16.306 08:12:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.306 08:12:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:16.306 08:12:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.306 08:12:27 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:16.306 08:12:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.306 08:12:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:16.306 00:21:16.306 real 0m50.157s 00:21:16.306 user 3m9.957s 00:21:16.306 sys 0m10.583s 00:21:16.306 08:12:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:16.306 08:12:27 -- common/autotest_common.sh@10 -- # set +x 00:21:16.306 ************************************ 00:21:16.306 END TEST nvmf_perf 00:21:16.306 ************************************ 00:21:16.306 08:12:27 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:16.306 08:12:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:16.306 08:12:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:16.306 08:12:27 -- common/autotest_common.sh@10 -- # set +x 00:21:16.565 ************************************ 00:21:16.565 START TEST nvmf_fio_host 00:21:16.565 ************************************ 00:21:16.565 08:12:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:16.565 * Looking for test storage... 00:21:16.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:16.565 08:12:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:16.565 08:12:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:16.565 08:12:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:16.565 08:12:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:16.565 08:12:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:16.565 08:12:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:16.565 08:12:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:16.565 08:12:27 -- scripts/common.sh@335 -- # IFS=.-: 00:21:16.565 08:12:27 -- scripts/common.sh@335 -- # read -ra ver1 00:21:16.565 08:12:27 -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.565 08:12:27 -- scripts/common.sh@336 -- # read -ra ver2 00:21:16.565 08:12:27 -- scripts/common.sh@337 -- # local 'op=<' 00:21:16.565 08:12:27 -- scripts/common.sh@339 -- # ver1_l=2 00:21:16.565 08:12:27 -- scripts/common.sh@340 -- # ver2_l=1 00:21:16.565 08:12:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:16.565 08:12:27 -- scripts/common.sh@343 -- # case "$op" in 00:21:16.565 08:12:27 -- scripts/common.sh@344 -- # : 1 00:21:16.565 08:12:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:16.565 08:12:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.565 08:12:27 -- scripts/common.sh@364 -- # decimal 1 00:21:16.565 08:12:27 -- scripts/common.sh@352 -- # local d=1 00:21:16.565 08:12:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.565 08:12:27 -- scripts/common.sh@354 -- # echo 1 00:21:16.565 08:12:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:16.565 08:12:27 -- scripts/common.sh@365 -- # decimal 2 00:21:16.565 08:12:27 -- scripts/common.sh@352 -- # local d=2 00:21:16.565 08:12:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.565 08:12:27 -- scripts/common.sh@354 -- # echo 2 00:21:16.565 08:12:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:16.565 08:12:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:16.565 08:12:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:16.565 08:12:27 -- scripts/common.sh@367 -- # return 0 00:21:16.565 08:12:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.565 08:12:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:16.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.565 --rc genhtml_branch_coverage=1 00:21:16.565 --rc genhtml_function_coverage=1 00:21:16.565 --rc genhtml_legend=1 00:21:16.565 --rc geninfo_all_blocks=1 00:21:16.565 --rc geninfo_unexecuted_blocks=1 00:21:16.565 00:21:16.565 ' 00:21:16.565 08:12:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:16.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.566 --rc genhtml_branch_coverage=1 00:21:16.566 --rc genhtml_function_coverage=1 00:21:16.566 --rc genhtml_legend=1 00:21:16.566 --rc geninfo_all_blocks=1 00:21:16.566 --rc geninfo_unexecuted_blocks=1 00:21:16.566 00:21:16.566 ' 00:21:16.566 08:12:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:16.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.566 --rc genhtml_branch_coverage=1 00:21:16.566 --rc genhtml_function_coverage=1 00:21:16.566 --rc genhtml_legend=1 00:21:16.566 --rc geninfo_all_blocks=1 00:21:16.566 --rc geninfo_unexecuted_blocks=1 00:21:16.566 00:21:16.566 ' 00:21:16.566 08:12:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:16.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.566 --rc genhtml_branch_coverage=1 00:21:16.566 --rc genhtml_function_coverage=1 00:21:16.566 --rc genhtml_legend=1 00:21:16.566 --rc geninfo_all_blocks=1 00:21:16.566 --rc geninfo_unexecuted_blocks=1 00:21:16.566 00:21:16.566 ' 00:21:16.566 08:12:27 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.566 08:12:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.566 08:12:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.566 08:12:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.566 08:12:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- paths/export.sh@5 -- # export PATH 00:21:16.566 08:12:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.566 08:12:27 -- nvmf/common.sh@7 -- # uname -s 00:21:16.566 08:12:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.566 08:12:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.566 08:12:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.566 08:12:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.566 08:12:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.566 08:12:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.566 08:12:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.566 08:12:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.566 08:12:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.566 08:12:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.566 08:12:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:21:16.566 08:12:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:21:16.566 08:12:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.566 08:12:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.566 08:12:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.566 08:12:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.566 08:12:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.566 08:12:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.566 08:12:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.566 08:12:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- paths/export.sh@5 -- # export PATH 00:21:16.566 08:12:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.566 08:12:27 -- nvmf/common.sh@46 -- # : 0 00:21:16.566 08:12:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:16.566 08:12:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:16.566 08:12:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:16.566 08:12:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.566 08:12:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.566 08:12:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:16.566 08:12:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:16.566 08:12:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:16.566 08:12:27 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:16.566 08:12:27 -- host/fio.sh@14 -- # nvmftestinit 00:21:16.566 08:12:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:16.566 08:12:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.566 08:12:27 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:21:16.566 08:12:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:16.566 08:12:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:16.566 08:12:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.566 08:12:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.566 08:12:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.566 08:12:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:16.566 08:12:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:16.566 08:12:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:16.566 08:12:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:16.566 08:12:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:16.566 08:12:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:16.566 08:12:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.566 08:12:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.566 08:12:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:16.566 08:12:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:16.566 08:12:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:16.566 08:12:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:16.566 08:12:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:16.566 08:12:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.566 08:12:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:16.566 08:12:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:16.566 08:12:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:16.566 08:12:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:16.566 08:12:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:16.566 08:12:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:16.566 Cannot find device "nvmf_tgt_br" 00:21:16.566 08:12:27 -- nvmf/common.sh@154 -- # true 00:21:16.566 08:12:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.825 Cannot find device "nvmf_tgt_br2" 00:21:16.825 08:12:27 -- nvmf/common.sh@155 -- # true 00:21:16.825 08:12:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:16.825 08:12:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:16.825 Cannot find device "nvmf_tgt_br" 00:21:16.825 08:12:27 -- nvmf/common.sh@157 -- # true 00:21:16.825 08:12:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:16.825 Cannot find device "nvmf_tgt_br2" 00:21:16.825 08:12:27 -- nvmf/common.sh@158 -- # true 00:21:16.825 08:12:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:16.825 08:12:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:16.825 08:12:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.825 08:12:27 -- nvmf/common.sh@161 -- # true 00:21:16.825 08:12:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.825 08:12:27 -- nvmf/common.sh@162 -- # true 00:21:16.825 08:12:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:16.825 08:12:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:16.825 08:12:27 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:16.825 08:12:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:16.825 08:12:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:16.825 08:12:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:16.825 08:12:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:16.825 08:12:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:16.825 08:12:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:16.825 08:12:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:16.825 08:12:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:16.825 08:12:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:16.825 08:12:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:16.825 08:12:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:16.825 08:12:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:16.825 08:12:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:16.825 08:12:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:16.825 08:12:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:16.825 08:12:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:16.825 08:12:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:16.825 08:12:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:16.825 08:12:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:16.825 08:12:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:16.825 08:12:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:16.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:16.825 00:21:16.825 --- 10.0.0.2 ping statistics --- 00:21:16.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.825 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:16.825 08:12:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:16.825 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:16.825 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:16.825 00:21:16.825 --- 10.0.0.3 ping statistics --- 00:21:16.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.825 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:17.084 08:12:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:17.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:17.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:17.084 00:21:17.084 --- 10.0.0.1 ping statistics --- 00:21:17.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.084 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:17.084 08:12:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:17.084 08:12:28 -- nvmf/common.sh@421 -- # return 0 00:21:17.084 08:12:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:17.084 08:12:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:17.084 08:12:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:17.084 08:12:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:17.084 08:12:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:17.084 08:12:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:17.084 08:12:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:17.084 08:12:28 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:17.084 08:12:28 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:17.084 08:12:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:17.084 08:12:28 -- common/autotest_common.sh@10 -- # set +x 00:21:17.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.084 08:12:28 -- host/fio.sh@24 -- # nvmfpid=94783 00:21:17.084 08:12:28 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.084 08:12:28 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:17.084 08:12:28 -- host/fio.sh@28 -- # waitforlisten 94783 00:21:17.084 08:12:28 -- common/autotest_common.sh@829 -- # '[' -z 94783 ']' 00:21:17.084 08:12:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.084 08:12:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.084 08:12:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.084 08:12:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.084 08:12:28 -- common/autotest_common.sh@10 -- # set +x 00:21:17.084 [2024-12-07 08:12:28.189559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:17.084 [2024-12-07 08:12:28.189693] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.084 [2024-12-07 08:12:28.331502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.342 [2024-12-07 08:12:28.408560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:17.342 [2024-12-07 08:12:28.408708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.342 [2024-12-07 08:12:28.408721] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.342 [2024-12-07 08:12:28.408729] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
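A condensed sketch of the nvmf_veth_init topology built in the trace above, assuming the interface names, addresses, and namespace shown there (the second target interface and the link-up steps are elided for brevity); the target is then launched inside the namespace and driven over /var/tmp/spdk.sock:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &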
00:21:17.342 [2024-12-07 08:12:28.408872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.342 [2024-12-07 08:12:28.409298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.342 [2024-12-07 08:12:28.409666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.342 [2024-12-07 08:12:28.409671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.908 08:12:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:17.908 08:12:29 -- common/autotest_common.sh@862 -- # return 0 00:21:17.908 08:12:29 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:18.166 [2024-12-07 08:12:29.362049] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.166 08:12:29 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:18.166 08:12:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:18.166 08:12:29 -- common/autotest_common.sh@10 -- # set +x 00:21:18.166 08:12:29 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:18.424 Malloc1 00:21:18.424 08:12:29 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.682 08:12:29 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:18.941 08:12:30 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.199 [2024-12-07 08:12:30.385671] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.199 08:12:30 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:19.457 08:12:30 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:19.457 08:12:30 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.457 08:12:30 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.457 08:12:30 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:19.457 08:12:30 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.457 08:12:30 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:19.457 08:12:30 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.457 08:12:30 -- common/autotest_common.sh@1330 -- # shift 00:21:19.457 08:12:30 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:19.457 08:12:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:19.457 08:12:30 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:19.457 08:12:30 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:19.457 08:12:30 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:19.457 08:12:30 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:19.457 08:12:30 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:19.457 08:12:30 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:19.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:19.717 fio-3.35 00:21:19.717 Starting 1 thread 00:21:22.248 00:21:22.248 test: (groupid=0, jobs=1): err= 0: pid=94909: Sat Dec 7 08:12:33 2024 00:21:22.248 read: IOPS=9590, BW=37.5MiB/s (39.3MB/s)(75.2MiB/2006msec) 00:21:22.248 slat (nsec): min=1956, max=201194, avg=2521.06, stdev=2482.18 00:21:22.248 clat (usec): min=2344, max=12716, avg=7023.32, stdev=619.74 00:21:22.248 lat (usec): min=2377, max=12718, avg=7025.84, stdev=619.62 00:21:22.248 clat percentiles (usec): 00:21:22.248 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6587], 00:21:22.248 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:21:22.248 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8029], 00:21:22.248 | 99.00th=[ 8586], 99.50th=[ 9765], 99.90th=[10814], 99.95th=[11207], 00:21:22.248 | 99.99th=[12649] 00:21:22.248 bw ( KiB/s): min=37184, max=39352, per=99.84%, avg=38301.00, stdev=1005.31, samples=4 00:21:22.248 iops : min= 9296, max= 9838, avg=9575.25, stdev=251.33, samples=4 00:21:22.248 write: IOPS=9597, BW=37.5MiB/s (39.3MB/s)(75.2MiB/2006msec); 0 zone resets 00:21:22.248 slat (usec): min=2, max=157, avg= 2.65, stdev= 1.96 00:21:22.248 clat (usec): min=1532, max=12076, avg=6262.90, stdev=511.57 00:21:22.248 lat (usec): min=1546, max=12078, avg=6265.55, stdev=511.46 00:21:22.248 clat percentiles (usec): 00:21:22.248 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5932], 00:21:22.248 | 30.00th=[ 6063], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:21:22.248 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6783], 95.00th=[ 6980], 00:21:22.248 | 99.00th=[ 7373], 99.50th=[ 7832], 99.90th=[ 9896], 99.95th=[10421], 00:21:22.248 | 99.99th=[11994] 00:21:22.248 bw ( KiB/s): min=37912, max=38592, per=99.90%, avg=38349.25, stdev=307.68, samples=4 00:21:22.248 iops : min= 9478, max= 9648, avg=9587.25, stdev=76.87, samples=4 00:21:22.248 lat (msec) : 2=0.03%, 4=0.14%, 10=99.62%, 20=0.22% 00:21:22.248 cpu : usr=64.34%, sys=26.13%, ctx=20, majf=0, minf=5 00:21:22.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:22.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.248 issued rwts: total=19239,19252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.248 00:21:22.248 Run status group 0 (all jobs): 00:21:22.248 READ: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.2MiB (78.8MB), 
run=2006-2006msec 00:21:22.248 WRITE: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.2MiB (78.9MB), run=2006-2006msec 00:21:22.248 08:12:33 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:22.248 08:12:33 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:22.248 08:12:33 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:22.248 08:12:33 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:22.248 08:12:33 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:22.248 08:12:33 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:22.248 08:12:33 -- common/autotest_common.sh@1330 -- # shift 00:21:22.248 08:12:33 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:22.248 08:12:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:22.248 08:12:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:22.248 08:12:33 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:22.248 08:12:33 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:22.248 08:12:33 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:22.248 08:12:33 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:22.248 08:12:33 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:22.248 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:22.248 fio-3.35 00:21:22.248 Starting 1 thread 00:21:24.775 00:21:24.775 test: (groupid=0, jobs=1): err= 0: pid=94952: Sat Dec 7 08:12:35 2024 00:21:24.775 read: IOPS=8412, BW=131MiB/s (138MB/s)(264MiB/2006msec) 00:21:24.775 slat (usec): min=2, max=124, avg= 3.79, stdev= 2.26 00:21:24.775 clat (usec): min=1891, max=17823, avg=9172.82, stdev=2209.13 00:21:24.775 lat (usec): min=1895, max=17826, avg=9176.61, stdev=2209.26 00:21:24.775 clat percentiles (usec): 00:21:24.776 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7242], 00:21:24.776 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:21:24.776 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12125], 95.00th=[13042], 00:21:24.776 | 99.00th=[14615], 99.50th=[15533], 99.90th=[17171], 99.95th=[17433], 00:21:24.776 | 99.99th=[17695] 00:21:24.776 bw ( KiB/s): min=63904, max=72256, per=49.92%, avg=67192.00, stdev=3829.89, samples=4 00:21:24.776 iops : 
min= 3994, max= 4516, avg=4199.50, stdev=239.37, samples=4 00:21:24.776 write: IOPS=4709, BW=73.6MiB/s (77.2MB/s)(137MiB/1862msec); 0 zone resets 00:21:24.776 slat (usec): min=33, max=351, avg=38.54, stdev= 8.90 00:21:24.776 clat (usec): min=3409, max=17289, avg=10783.50, stdev=1929.63 00:21:24.776 lat (usec): min=3456, max=17325, avg=10822.05, stdev=1930.93 00:21:24.776 clat percentiles (usec): 00:21:24.776 | 1.00th=[ 6915], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9110], 00:21:24.776 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:21:24.776 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13304], 95.00th=[14091], 00:21:24.776 | 99.00th=[16057], 99.50th=[16450], 99.90th=[16909], 99.95th=[17171], 00:21:24.776 | 99.99th=[17171] 00:21:24.776 bw ( KiB/s): min=66752, max=74496, per=92.69%, avg=69840.00, stdev=3762.92, samples=4 00:21:24.776 iops : min= 4172, max= 4656, avg=4365.00, stdev=235.18, samples=4 00:21:24.776 lat (msec) : 2=0.02%, 4=0.22%, 10=55.03%, 20=44.74% 00:21:24.776 cpu : usr=71.57%, sys=18.05%, ctx=6, majf=0, minf=1 00:21:24.776 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:24.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.776 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:24.776 issued rwts: total=16875,8769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.776 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:24.776 00:21:24.776 Run status group 0 (all jobs): 00:21:24.776 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=264MiB (276MB), run=2006-2006msec 00:21:24.776 WRITE: bw=73.6MiB/s (77.2MB/s), 73.6MiB/s-73.6MiB/s (77.2MB/s-77.2MB/s), io=137MiB (144MB), run=1862-1862msec 00:21:24.776 08:12:35 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:24.776 08:12:35 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:24.776 08:12:35 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:24.776 08:12:35 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:24.776 08:12:35 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:24.776 08:12:35 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:24.776 08:12:35 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:24.776 08:12:35 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:24.776 08:12:35 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:24.776 08:12:35 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:24.776 08:12:35 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:24.776 08:12:35 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:25.035 Nvme0n1 00:21:25.035 08:12:36 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:25.292 08:12:36 -- host/fio.sh@53 -- # ls_guid=3a27b1ba-a6b9-4b2b-9c80-c1095519a0a2 00:21:25.292 08:12:36 -- host/fio.sh@54 -- # get_lvs_free_mb 3a27b1ba-a6b9-4b2b-9c80-c1095519a0a2 00:21:25.292 08:12:36 -- common/autotest_common.sh@1353 -- # local lvs_uuid=3a27b1ba-a6b9-4b2b-9c80-c1095519a0a2 00:21:25.292 08:12:36 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:25.292 08:12:36 -- common/autotest_common.sh@1355 -- # local fc 00:21:25.292 08:12:36 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:25.292 08:12:36 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:25.550 08:12:36 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:25.550 { 00:21:25.550 "base_bdev": "Nvme0n1", 00:21:25.550 "block_size": 4096, 00:21:25.550 "cluster_size": 1073741824, 00:21:25.550 "free_clusters": 4, 00:21:25.550 "name": "lvs_0", 00:21:25.550 "total_data_clusters": 4, 00:21:25.550 "uuid": "3a27b1ba-a6b9-4b2b-9c80-c1095519a0a2" 00:21:25.550 } 00:21:25.550 ]' 00:21:25.550 08:12:36 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="3a27b1ba-a6b9-4b2b-9c80-c1095519a0a2") .free_clusters' 00:21:25.550 08:12:36 -- common/autotest_common.sh@1358 -- # fc=4 00:21:25.550 08:12:36 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="3a27b1ba-a6b9-4b2b-9c80-c1095519a0a2") .cluster_size' 00:21:25.807 08:12:36 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:25.807 08:12:36 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:25.807 4096 00:21:25.807 08:12:36 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:25.807 08:12:36 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:25.807 78cf6272-ea9e-41e1-9c43-4c41bdc7956f 00:21:25.807 08:12:37 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:26.064 08:12:37 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:26.321 08:12:37 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:26.580 08:12:37 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:26.580 08:12:37 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:26.580 08:12:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:26.580 08:12:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:26.580 08:12:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:26.580 08:12:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.580 08:12:37 -- common/autotest_common.sh@1330 -- # shift 00:21:26.580 08:12:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:26.580 08:12:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.580 08:12:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.580 08:12:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:26.580 08:12:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:26.841 08:12:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:26.841 08:12:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:26.841 08:12:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:26.841 08:12:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:26.841 08:12:37 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:26.841 08:12:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:26.841 08:12:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:26.841 08:12:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:26.841 08:12:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:26.841 08:12:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:26.841 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:26.841 fio-3.35 00:21:26.841 Starting 1 thread 00:21:29.369 00:21:29.369 test: (groupid=0, jobs=1): err= 0: pid=95110: Sat Dec 7 08:12:40 2024 00:21:29.369 read: IOPS=6682, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2009msec) 00:21:29.369 slat (nsec): min=1930, max=342441, avg=2436.80, stdev=3799.21 00:21:29.369 clat (usec): min=4023, max=15882, avg=10146.45, stdev=915.95 00:21:29.369 lat (usec): min=4032, max=15885, avg=10148.89, stdev=915.74 00:21:29.369 clat percentiles (usec): 00:21:29.369 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:21:29.369 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:21:29.369 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:21:29.369 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13698], 99.95th=[15533], 00:21:29.369 | 99.99th=[15795] 00:21:29.369 bw ( KiB/s): min=25608, max=27304, per=99.96%, avg=26720.00, stdev=765.61, samples=4 00:21:29.369 iops : min= 6402, max= 6826, avg=6680.00, stdev=191.40, samples=4 00:21:29.369 write: IOPS=6687, BW=26.1MiB/s (27.4MB/s)(52.5MiB/2009msec); 0 zone resets 00:21:29.369 slat (nsec): min=1971, max=284988, avg=2510.16, stdev=2819.46 00:21:29.369 clat (usec): min=2458, max=16952, avg=8882.64, stdev=815.64 00:21:29.369 lat (usec): min=2471, max=16954, avg=8885.15, stdev=815.53 00:21:29.369 clat percentiles (usec): 00:21:29.369 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:21:29.369 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:21:29.369 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:21:29.369 | 99.00th=[10683], 99.50th=[10814], 99.90th=[14615], 99.95th=[15664], 00:21:29.369 | 99.99th=[16909] 00:21:29.369 bw ( KiB/s): min=26560, max=26984, per=99.99%, avg=26748.00, stdev=176.79, samples=4 00:21:29.369 iops : min= 6640, max= 6746, avg=6687.00, stdev=44.20, samples=4 00:21:29.369 lat (msec) : 4=0.03%, 10=69.01%, 20=30.96% 00:21:29.369 cpu : usr=71.31%, sys=22.41%, ctx=4, majf=0, minf=5 00:21:29.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:29.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:29.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:29.369 issued rwts: total=13425,13436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:29.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:29.369 00:21:29.369 Run status group 0 (all jobs): 00:21:29.369 READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (55.0MB), run=2009-2009msec 00:21:29.369 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.5MiB (55.0MB), run=2009-2009msec 00:21:29.369 08:12:40 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:29.369 08:12:40 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:29.625 08:12:40 -- host/fio.sh@64 -- # ls_nested_guid=879ce284-3acb-4ce7-85d2-cc239dae3917 00:21:29.625 08:12:40 -- host/fio.sh@65 -- # get_lvs_free_mb 879ce284-3acb-4ce7-85d2-cc239dae3917 00:21:29.625 08:12:40 -- common/autotest_common.sh@1353 -- # local lvs_uuid=879ce284-3acb-4ce7-85d2-cc239dae3917 00:21:29.625 08:12:40 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:29.625 08:12:40 -- common/autotest_common.sh@1355 -- # local fc 00:21:29.625 08:12:40 -- common/autotest_common.sh@1356 -- # local cs 00:21:29.625 08:12:40 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:29.882 08:12:41 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:29.882 { 00:21:29.882 "base_bdev": "Nvme0n1", 00:21:29.882 "block_size": 4096, 00:21:29.882 "cluster_size": 1073741824, 00:21:29.882 "free_clusters": 0, 00:21:29.882 "name": "lvs_0", 00:21:29.882 "total_data_clusters": 4, 00:21:29.882 "uuid": "3a27b1ba-a6b9-4b2b-9c80-c1095519a0a2" 00:21:29.882 }, 00:21:29.882 { 00:21:29.882 "base_bdev": "78cf6272-ea9e-41e1-9c43-4c41bdc7956f", 00:21:29.882 "block_size": 4096, 00:21:29.882 "cluster_size": 4194304, 00:21:29.882 "free_clusters": 1022, 00:21:29.882 "name": "lvs_n_0", 00:21:29.882 "total_data_clusters": 1022, 00:21:29.882 "uuid": "879ce284-3acb-4ce7-85d2-cc239dae3917" 00:21:29.882 } 00:21:29.882 ]' 00:21:29.882 08:12:41 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="879ce284-3acb-4ce7-85d2-cc239dae3917") .free_clusters' 00:21:29.882 08:12:41 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:29.882 08:12:41 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="879ce284-3acb-4ce7-85d2-cc239dae3917") .cluster_size' 00:21:30.139 4088 00:21:30.139 08:12:41 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:30.139 08:12:41 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:30.139 08:12:41 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:30.139 08:12:41 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:30.396 5138380c-44ae-4901-935d-02e869005891 00:21:30.396 08:12:41 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:30.655 08:12:41 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:30.912 08:12:41 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:30.912 08:12:42 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:30.912 08:12:42 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:30.912 08:12:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:30.912 08:12:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:30.912 
08:12:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:30.912 08:12:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.912 08:12:42 -- common/autotest_common.sh@1330 -- # shift 00:21:30.912 08:12:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:30.912 08:12:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.912 08:12:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:30.912 08:12:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.912 08:12:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:30.912 08:12:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:30.912 08:12:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:30.912 08:12:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.912 08:12:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:30.912 08:12:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:30.912 08:12:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:31.169 08:12:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:31.169 08:12:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:31.169 08:12:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:31.169 08:12:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:31.169 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:31.169 fio-3.35 00:21:31.169 Starting 1 thread 00:21:33.694 00:21:33.694 test: (groupid=0, jobs=1): err= 0: pid=95230: Sat Dec 7 08:12:44 2024 00:21:33.694 read: IOPS=5976, BW=23.3MiB/s (24.5MB/s)(46.9MiB/2008msec) 00:21:33.694 slat (usec): min=2, max=298, avg= 2.67, stdev= 3.68 00:21:33.694 clat (usec): min=4478, max=19656, avg=11412.62, stdev=1085.61 00:21:33.694 lat (usec): min=4486, max=19658, avg=11415.29, stdev=1085.44 00:21:33.694 clat percentiles (usec): 00:21:33.694 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10552], 00:21:33.694 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:21:33.694 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12780], 95.00th=[13173], 00:21:33.694 | 99.00th=[13829], 99.50th=[14222], 99.90th=[18220], 99.95th=[19268], 00:21:33.694 | 99.99th=[19530] 00:21:33.694 bw ( KiB/s): min=22928, max=24328, per=99.73%, avg=23842.00, stdev=632.50, samples=4 00:21:33.694 iops : min= 5732, max= 6082, avg=5960.50, stdev=158.13, samples=4 00:21:33.694 write: IOPS=5961, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2008msec); 0 zone resets 00:21:33.694 slat (usec): min=2, max=205, avg= 2.82, stdev= 2.31 00:21:33.694 clat (usec): min=2126, max=17004, avg=9944.76, stdev=903.65 00:21:33.694 lat (usec): min=2136, max=17007, avg=9947.58, stdev=903.53 00:21:33.694 clat percentiles (usec): 00:21:33.694 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:21:33.694 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:21:33.694 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:21:33.694 | 99.00th=[11863], 99.50th=[12256], 99.90th=[13829], 99.95th=[16581], 00:21:33.694 | 99.99th=[16909] 00:21:33.694 bw ( KiB/s): 
min=23704, max=23912, per=100.00%, avg=23846.00, stdev=96.08, samples=4 00:21:33.694 iops : min= 5926, max= 5978, avg=5961.50, stdev=24.02, samples=4 00:21:33.694 lat (msec) : 4=0.05%, 10=30.12%, 20=69.83% 00:21:33.694 cpu : usr=73.54%, sys=20.28%, ctx=6, majf=0, minf=5 00:21:33.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:33.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:33.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:33.694 issued rwts: total=12001,11970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:33.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:33.694 00:21:33.694 Run status group 0 (all jobs): 00:21:33.694 READ: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=46.9MiB (49.2MB), run=2008-2008msec 00:21:33.694 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.0MB), run=2008-2008msec 00:21:33.694 08:12:44 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:33.694 08:12:44 -- host/fio.sh@74 -- # sync 00:21:33.694 08:12:44 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:33.950 08:12:45 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:34.207 08:12:45 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:34.463 08:12:45 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:34.721 08:12:45 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:35.657 08:12:46 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:35.657 08:12:46 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:35.657 08:12:46 -- host/fio.sh@86 -- # nvmftestfini 00:21:35.657 08:12:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:35.657 08:12:46 -- nvmf/common.sh@116 -- # sync 00:21:35.657 08:12:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:35.657 08:12:46 -- nvmf/common.sh@119 -- # set +e 00:21:35.657 08:12:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:35.657 08:12:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:35.657 rmmod nvme_tcp 00:21:35.657 rmmod nvme_fabrics 00:21:35.657 rmmod nvme_keyring 00:21:35.657 08:12:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:35.657 08:12:46 -- nvmf/common.sh@123 -- # set -e 00:21:35.657 08:12:46 -- nvmf/common.sh@124 -- # return 0 00:21:35.657 08:12:46 -- nvmf/common.sh@477 -- # '[' -n 94783 ']' 00:21:35.657 08:12:46 -- nvmf/common.sh@478 -- # killprocess 94783 00:21:35.657 08:12:46 -- common/autotest_common.sh@936 -- # '[' -z 94783 ']' 00:21:35.657 08:12:46 -- common/autotest_common.sh@940 -- # kill -0 94783 00:21:35.657 08:12:46 -- common/autotest_common.sh@941 -- # uname 00:21:35.657 08:12:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:35.657 08:12:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94783 00:21:35.915 killing process with pid 94783 00:21:35.915 08:12:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:35.915 08:12:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:35.915 08:12:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94783' 00:21:35.915 08:12:46 -- common/autotest_common.sh@955 
-- # kill 94783 00:21:35.915 08:12:46 -- common/autotest_common.sh@960 -- # wait 94783 00:21:35.915 08:12:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:35.915 08:12:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:35.915 08:12:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:35.915 08:12:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.915 08:12:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:35.915 08:12:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.915 08:12:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.915 08:12:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.915 08:12:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:35.915 00:21:35.915 real 0m19.603s 00:21:35.915 user 1m25.432s 00:21:35.915 sys 0m4.457s 00:21:35.915 08:12:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:35.915 08:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:35.915 ************************************ 00:21:35.915 END TEST nvmf_fio_host 00:21:35.915 ************************************ 00:21:36.173 08:12:47 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:36.173 08:12:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:36.173 08:12:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:36.173 08:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:36.173 ************************************ 00:21:36.173 START TEST nvmf_failover 00:21:36.173 ************************************ 00:21:36.173 08:12:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:36.173 * Looking for test storage... 00:21:36.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:36.173 08:12:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:36.173 08:12:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:36.173 08:12:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:36.173 08:12:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:36.173 08:12:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:36.173 08:12:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:36.173 08:12:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:36.173 08:12:47 -- scripts/common.sh@335 -- # IFS=.-: 00:21:36.173 08:12:47 -- scripts/common.sh@335 -- # read -ra ver1 00:21:36.173 08:12:47 -- scripts/common.sh@336 -- # IFS=.-: 00:21:36.173 08:12:47 -- scripts/common.sh@336 -- # read -ra ver2 00:21:36.173 08:12:47 -- scripts/common.sh@337 -- # local 'op=<' 00:21:36.173 08:12:47 -- scripts/common.sh@339 -- # ver1_l=2 00:21:36.173 08:12:47 -- scripts/common.sh@340 -- # ver2_l=1 00:21:36.173 08:12:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:36.173 08:12:47 -- scripts/common.sh@343 -- # case "$op" in 00:21:36.173 08:12:47 -- scripts/common.sh@344 -- # : 1 00:21:36.173 08:12:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:36.173 08:12:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:36.173 08:12:47 -- scripts/common.sh@364 -- # decimal 1 00:21:36.173 08:12:47 -- scripts/common.sh@352 -- # local d=1 00:21:36.173 08:12:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:36.173 08:12:47 -- scripts/common.sh@354 -- # echo 1 00:21:36.174 08:12:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:36.174 08:12:47 -- scripts/common.sh@365 -- # decimal 2 00:21:36.174 08:12:47 -- scripts/common.sh@352 -- # local d=2 00:21:36.174 08:12:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:36.174 08:12:47 -- scripts/common.sh@354 -- # echo 2 00:21:36.174 08:12:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:36.174 08:12:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:36.174 08:12:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:36.174 08:12:47 -- scripts/common.sh@367 -- # return 0 00:21:36.174 08:12:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:36.174 08:12:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.174 --rc genhtml_branch_coverage=1 00:21:36.174 --rc genhtml_function_coverage=1 00:21:36.174 --rc genhtml_legend=1 00:21:36.174 --rc geninfo_all_blocks=1 00:21:36.174 --rc geninfo_unexecuted_blocks=1 00:21:36.174 00:21:36.174 ' 00:21:36.174 08:12:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.174 --rc genhtml_branch_coverage=1 00:21:36.174 --rc genhtml_function_coverage=1 00:21:36.174 --rc genhtml_legend=1 00:21:36.174 --rc geninfo_all_blocks=1 00:21:36.174 --rc geninfo_unexecuted_blocks=1 00:21:36.174 00:21:36.174 ' 00:21:36.174 08:12:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.174 --rc genhtml_branch_coverage=1 00:21:36.174 --rc genhtml_function_coverage=1 00:21:36.174 --rc genhtml_legend=1 00:21:36.174 --rc geninfo_all_blocks=1 00:21:36.174 --rc geninfo_unexecuted_blocks=1 00:21:36.174 00:21:36.174 ' 00:21:36.174 08:12:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:36.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:36.174 --rc genhtml_branch_coverage=1 00:21:36.174 --rc genhtml_function_coverage=1 00:21:36.174 --rc genhtml_legend=1 00:21:36.174 --rc geninfo_all_blocks=1 00:21:36.174 --rc geninfo_unexecuted_blocks=1 00:21:36.174 00:21:36.174 ' 00:21:36.174 08:12:47 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:36.174 08:12:47 -- nvmf/common.sh@7 -- # uname -s 00:21:36.174 08:12:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.174 08:12:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.174 08:12:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.174 08:12:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.174 08:12:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.174 08:12:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.174 08:12:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.174 08:12:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.174 08:12:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.174 08:12:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.174 08:12:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:21:36.174 
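The lt 1.15 2 trace above is cmp_versions in scripts/common.sh deciding how to drive lcov: the installed version (1.15 in this run) and the threshold 2 are split on dots and compared field by field as integers, 1 < 2 settles it on the first field, and the pre-2.0 --rc lcov_branch_coverage/lcov_function_coverage options are exported. A rough, simplified sketch of that comparison (the helper name is illustrative, not the full scripts/common.sh implementation):

    # returns success (0) when version $1 sorts strictly before version $2
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "use pre-2.0 lcov flags"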
08:12:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:21:36.174 08:12:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.174 08:12:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.174 08:12:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:36.174 08:12:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:36.174 08:12:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.174 08:12:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.174 08:12:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.174 08:12:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.174 08:12:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.174 08:12:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.174 08:12:47 -- paths/export.sh@5 -- # export PATH 00:21:36.174 08:12:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.174 08:12:47 -- nvmf/common.sh@46 -- # : 0 00:21:36.174 08:12:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:36.174 08:12:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:36.174 08:12:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:36.174 08:12:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.174 08:12:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.174 08:12:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
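A few entries above, nvmf/common.sh also fixes the host identity for this run: nvme gen-hostnqn produces a fresh NQN, the matching bare UUID becomes NVME_HOSTID, and both are kept in the NVME_HOST array so that any kernel-initiator connect performed by later tests identifies itself consistently. A hedged sketch of that handling (the connect call is illustrative only; this particular suite drives I/O through the fio plugin and bdevperf rather than nvme connect):

    # hedged sketch, values illustrative
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # the bare UUID part, as in the trace above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"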
00:21:36.174 08:12:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:36.174 08:12:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:36.174 08:12:47 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:36.174 08:12:47 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:36.174 08:12:47 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.174 08:12:47 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.174 08:12:47 -- host/failover.sh@18 -- # nvmftestinit 00:21:36.174 08:12:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:36.174 08:12:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.174 08:12:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:36.174 08:12:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:36.174 08:12:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:36.174 08:12:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.174 08:12:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.174 08:12:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.174 08:12:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:36.174 08:12:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:36.174 08:12:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:36.174 08:12:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:36.174 08:12:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:36.174 08:12:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:36.174 08:12:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.174 08:12:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.174 08:12:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:36.174 08:12:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:36.174 08:12:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:36.174 08:12:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:36.174 08:12:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:36.174 08:12:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.174 08:12:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:36.174 08:12:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:36.174 08:12:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:36.174 08:12:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:36.174 08:12:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:36.433 08:12:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:36.433 Cannot find device "nvmf_tgt_br" 00:21:36.433 08:12:47 -- nvmf/common.sh@154 -- # true 00:21:36.433 08:12:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:36.433 Cannot find device "nvmf_tgt_br2" 00:21:36.433 08:12:47 -- nvmf/common.sh@155 -- # true 00:21:36.433 08:12:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:36.433 08:12:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:36.433 Cannot find device "nvmf_tgt_br" 00:21:36.433 08:12:47 -- nvmf/common.sh@157 -- # true 00:21:36.433 08:12:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:36.433 Cannot find device "nvmf_tgt_br2" 00:21:36.433 08:12:47 -- nvmf/common.sh@158 -- # true 00:21:36.433 08:12:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:36.433 08:12:47 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:36.433 08:12:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:36.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.433 08:12:47 -- nvmf/common.sh@161 -- # true 00:21:36.433 08:12:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:36.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:36.433 08:12:47 -- nvmf/common.sh@162 -- # true 00:21:36.433 08:12:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:36.433 08:12:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:36.433 08:12:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:36.433 08:12:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:36.433 08:12:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:36.433 08:12:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:36.433 08:12:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:36.433 08:12:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:36.433 08:12:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:36.433 08:12:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:36.433 08:12:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:36.433 08:12:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:36.433 08:12:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:36.433 08:12:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:36.433 08:12:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:36.433 08:12:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:36.433 08:12:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:36.691 08:12:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:36.691 08:12:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:36.691 08:12:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:36.691 08:12:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:36.691 08:12:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:36.691 08:12:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:36.691 08:12:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:36.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:21:36.691 00:21:36.691 --- 10.0.0.2 ping statistics --- 00:21:36.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.691 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:36.691 08:12:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:36.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:36.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:21:36.691 00:21:36.691 --- 10.0.0.3 ping statistics --- 00:21:36.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.691 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:36.691 08:12:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:36.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:36.691 00:21:36.691 --- 10.0.0.1 ping statistics --- 00:21:36.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.691 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:36.691 08:12:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.691 08:12:47 -- nvmf/common.sh@421 -- # return 0 00:21:36.691 08:12:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:36.691 08:12:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.691 08:12:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:36.691 08:12:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:36.691 08:12:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.691 08:12:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:36.691 08:12:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:36.691 08:12:47 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:36.691 08:12:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:36.691 08:12:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:36.691 08:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:36.691 08:12:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:36.691 08:12:47 -- nvmf/common.sh@469 -- # nvmfpid=95509 00:21:36.691 08:12:47 -- nvmf/common.sh@470 -- # waitforlisten 95509 00:21:36.691 08:12:47 -- common/autotest_common.sh@829 -- # '[' -z 95509 ']' 00:21:36.691 08:12:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.691 08:12:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.691 08:12:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.691 08:12:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.691 08:12:47 -- common/autotest_common.sh@10 -- # set +x 00:21:36.691 [2024-12-07 08:12:47.858884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:36.691 [2024-12-07 08:12:47.859005] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.949 [2024-12-07 08:12:48.000032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:36.949 [2024-12-07 08:12:48.072532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:36.949 [2024-12-07 08:12:48.072701] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.949 [2024-12-07 08:12:48.072716] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
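Before the target starts, nvmf_veth_init above builds the virtual topology the whole test runs on: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator-side veth on the host (10.0.0.1), and a bridge nvmf_br joining the two sides, verified by the three pings. Reduced to its essentials it looks roughly like the sketch below (the second target interface and the iptables ACCEPT rules are left out; nvmf/common.sh has the full version):

    # hedged sketch of the veth/netns layout used by the tcp tests
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ping -c 1 10.0.0.2    # initiator -> target namespace across the bridge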
00:21:36.949 [2024-12-07 08:12:48.072725] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.949 [2024-12-07 08:12:48.073275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.949 [2024-12-07 08:12:48.073519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.949 [2024-12-07 08:12:48.073524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.883 08:12:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.883 08:12:48 -- common/autotest_common.sh@862 -- # return 0 00:21:37.883 08:12:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:37.883 08:12:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:37.883 08:12:48 -- common/autotest_common.sh@10 -- # set +x 00:21:37.883 08:12:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.883 08:12:48 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:38.141 [2024-12-07 08:12:49.173145] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.141 08:12:49 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:38.400 Malloc0 00:21:38.400 08:12:49 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:38.659 08:12:49 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.659 08:12:49 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.918 [2024-12-07 08:12:50.118920] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.918 08:12:50 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:39.176 [2024-12-07 08:12:50.399140] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:39.176 08:12:50 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:39.435 [2024-12-07 08:12:50.663461] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:39.435 08:12:50 -- host/failover.sh@31 -- # bdevperf_pid=95625 00:21:39.435 08:12:50 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:39.435 08:12:50 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:39.435 08:12:50 -- host/failover.sh@34 -- # waitforlisten 95625 /var/tmp/bdevperf.sock 00:21:39.435 08:12:50 -- common/autotest_common.sh@829 -- # '[' -z 95625 ']' 00:21:39.435 08:12:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.435 08:12:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.435 08:12:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:39.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.435 08:12:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.435 08:12:50 -- common/autotest_common.sh@10 -- # set +x 00:21:40.811 08:12:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:40.811 08:12:51 -- common/autotest_common.sh@862 -- # return 0 00:21:40.811 08:12:51 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:40.811 NVMe0n1 00:21:40.811 08:12:52 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.378 00:21:41.378 08:12:52 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:41.378 08:12:52 -- host/failover.sh@39 -- # run_test_pid=95674 00:21:41.378 08:12:52 -- host/failover.sh@41 -- # sleep 1 00:21:42.314 08:12:53 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.572 [2024-12-07 08:12:53.611119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611306] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.572 [2024-12-07 08:12:53.611346] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.573 [... the same nvmf_tcp_qpair_set_recv_state message repeats for tqpair=0x1527c90 ...] 00:21:42.573 [2024-12-07 08:12:53.611532]
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.573 [2024-12-07 08:12:53.611540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.573 [2024-12-07 08:12:53.611548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.573 [2024-12-07 08:12:53.611556] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1527c90 is same with the state(5) to be set 00:21:42.573 08:12:53 -- host/failover.sh@45 -- # sleep 3 00:21:45.859 08:12:56 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.859 00:21:45.859 08:12:56 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:46.118 [2024-12-07 08:12:57.233145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233254] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233263] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233274] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233282] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233291] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233308] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233316] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233324] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233332] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233340] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233348] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.118 [2024-12-07 08:12:57.233356] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 
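The failover scenario here is: nqn.2016-06.io.spdk:cnode1 exposes Malloc0 on three TCP listeners (4420, 4421, 4422), bdevperf attaches NVMe0 through 4420 and 4421, and failover.sh then removes listeners one at a time while the 15-second verify job keeps running; each nvmf_subsystem_remove_listener call is immediately followed by a burst of the set_recv_state messages seen above while the target walks the affected qpairs through their receive states. A skeleton of those steps, restricted to the RPCs actually traced in this run (bdevperf is assumed to be already listening on /var/tmp/bdevperf.sock):

    # hedged skeleton of the failover steps exercised by host/failover.sh
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # two initial paths into the same subsystem
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # drop the active listener; I/O should fail over to 4421
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # add a third path, then retire 4421 as well
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421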
00:21:46.118 [... the same nvmf_tcp_qpair_set_recv_state message repeats for tqpair=0x1529380 ...] 00:21:46.118 [2024-12-07 08:12:57.233906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the
state(5) to be set 00:21:46.119 [2024-12-07 08:12:57.233914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.119 [2024-12-07 08:12:57.233922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529380 is same with the state(5) to be set 00:21:46.119 08:12:57 -- host/failover.sh@50 -- # sleep 3 00:21:49.413 08:13:00 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.413 [2024-12-07 08:13:00.505688] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.413 08:13:00 -- host/failover.sh@55 -- # sleep 1 00:21:50.347 08:13:01 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:50.606 [2024-12-07 08:13:01.789267] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789329] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789341] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789358] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789393] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789401] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789425] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789434] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.606 [2024-12-07 08:13:01.789441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with 
the state(5) to be set 00:21:50.607 [... the same nvmf_tcp_qpair_set_recv_state message repeats for tqpair=0x1529a60 ...] 00:21:50.607 [2024-12-07 08:13:01.789671] tcp.c:1576:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789759] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 [2024-12-07 08:13:01.789792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1529a60 is same with the state(5) to be set 00:21:50.607 08:13:01 -- host/failover.sh@59 -- # wait 95674 00:21:57.170 0 00:21:57.170 08:13:07 -- host/failover.sh@61 -- # killprocess 95625 00:21:57.170 08:13:07 -- common/autotest_common.sh@936 -- # '[' -z 95625 ']' 00:21:57.170 08:13:07 -- common/autotest_common.sh@940 -- # kill -0 95625 00:21:57.170 08:13:07 -- common/autotest_common.sh@941 -- # uname 00:21:57.170 08:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:57.170 08:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95625 00:21:57.170 killing process with pid 95625 00:21:57.170 08:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:57.170 08:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:57.170 08:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95625' 00:21:57.170 08:13:07 -- common/autotest_common.sh@955 -- # kill 95625 00:21:57.170 08:13:07 -- common/autotest_common.sh@960 -- # wait 95625 00:21:57.170 08:13:07 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:57.170 [2024-12-07 08:12:50.730072] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:21:57.170 [2024-12-07 08:12:50.730203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95625 ] 00:21:57.170 [2024-12-07 08:12:50.861827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.170 [2024-12-07 08:12:50.939624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.170 Running I/O for 15 seconds... 00:21:57.170 [2024-12-07 08:12:53.611807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.611856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.611884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:125264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.611900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.611917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.611931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.611947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:125280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.611961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.611976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.611990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.612005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:125328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.612019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.612035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.612048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.612063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:125360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.612077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.612093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125368 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.612106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.612123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.612136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.170 [2024-12-07 08:12:53.612152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.170 [2024-12-07 08:12:53.612165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
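The block of *NOTICE* lines in this try.txt dump is bdevperf printing every queued READ/WRITE command that was completed manually with ABORTED - SQ DELETION once the first queue pair went down. The shell trace just before the dump (host/failover.sh@61 -- killprocess 95625) is the harness reaping the bdevperf process before cat-ing try.txt. A minimal sketch of that helper, reconstructed only from the steps visible in the trace (the real function in common/autotest_common.sh handles the sudo-wrapped case and forceful escalation in more detail), could look like:

  # Sketch only: mirrors the killprocess steps traced above, not the verbatim helper.
  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                       # @936: refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0          # @940: nothing to do if it already exited
    if [ "$(uname)" = Linux ]; then                 # @941: pick the ps invocation per OS
      process_name=$(ps --no-headers -o comm= "$pid")
    else
      process_name=$(ps -o comm= -p "$pid")
    fi
    echo "killing process with pid $pid"            # @954
    if [ "$process_name" = sudo ]; then             # @946: a sudo wrapper needs root to signal
      sudo kill "$pid"
    else
      kill "$pid"                                   # @955
    fi
    wait "$pid" 2>/dev/null || true                 # @960: reap it and collect the exit status
  }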
00:21:57.171 [2024-12-07 08:12:53.612447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:126000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:126016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:126024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:126032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 
08:12:53.612758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.612932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.612959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.612981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.613001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.613030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.171 [2024-12-07 08:12:53.613088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:125392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:125408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613386] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:125520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.171 [2024-12-07 08:12:53.613432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.171 [2024-12-07 08:12:53.613445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:125544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:125584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:125624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.613743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.613772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:126192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.613808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:126216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.613901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:126224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.613930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.613959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.613974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:126240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.613987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:125640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:125688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 
[2024-12-07 08:12:53.614357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.614490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.172 [2024-12-07 08:12:53.614547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.172 [2024-12-07 08:12:53.614656] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.172 [2024-12-07 08:12:53.614675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.614761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.614819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.614975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.614990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615269] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.173 [2024-12-07 08:12:53.615468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.173 [2024-12-07 08:12:53.615798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b71130 is same with the state(5) to be set 00:21:57.173 [2024-12-07 08:12:53.615830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.173 [2024-12-07 08:12:53.615841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.173 [2024-12-07 08:12:53.615857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125944 len:8 PRP1 0x0 PRP2 0x0 00:21:57.173 [2024-12-07 08:12:53.615871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.173 [2024-12-07 08:12:53.615928] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x1b71130 was disconnected and freed. reset controller. 00:21:57.174 [2024-12-07 08:12:53.615945] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:57.174 [2024-12-07 08:12:53.616000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:53.616034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:53.616049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:53.616062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:53.616076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:53.616088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:53.616102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:53.616114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:53.616127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:57.174 [2024-12-07 08:12:53.616181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aeccb0 (9): Bad file descriptor 00:21:57.174 [2024-12-07 08:12:53.618615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:57.174 [2024-12-07 08:12:53.649336] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
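At this point the first path is gone: qpair 0x1b71130 was disconnected and freed, bdev_nvme announced "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421", the admin queue's async event requests were aborted, the controller for nqn.2016-06.io.spdk:cnode1 was marked failed, and the subsequent reset against the second trid completed ("Resetting controller successful"). That rotation only happens because the bdev was attached with both listener addresses registered as alternate paths. A hedged sketch of the RPCs that build such a pair (command and flag names come from SPDK's scripts/rpc.py, but the bdev name NVMe0 and the exact arguments host/failover.sh passes are assumptions, not taken from this log):

  # Assumed setup: two TCP paths to the same subsystem under one controller name,
  # so bdev_nvme can fail over from 4420 to 4421 instead of failing the I/O.
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  # One way to force the failover observed above is to drop the first listener
  # on the target side:
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420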
00:21:57.174 [2024-12-07 08:12:57.233508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:57.233563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.233602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:57.233619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.233634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:57.233659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.233674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.174 [2024-12-07 08:12:57.233688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.233702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aeccb0 is same with the state(5) to be set 00:21:57.174 [2024-12-07 08:12:57.234002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.174 [2024-12-07 08:12:57.234841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.174 [2024-12-07 08:12:57.234856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.234870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.234886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.234899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.234915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.234928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.234943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.234957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.234986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.234999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 
[2024-12-07 08:12:57.235134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.175 [2024-12-07 08:12:57.235509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.175 [2024-12-07 08:12:57.235538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.175 [2024-12-07 08:12:57.235567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.175 [2024-12-07 08:12:57.235595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.175 [2024-12-07 08:12:57.235637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.175 [2024-12-07 08:12:57.235665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.175 [2024-12-07 08:12:57.235921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.175 [2024-12-07 08:12:57.235935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.175 [2024-12-07 08:12:57.235948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.235963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.235982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.235997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.236364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:57.176 [2024-12-07 08:12:57.236393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.236422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.236485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.236573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.236616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236707] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.236954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.236969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.236988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.237003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.237017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.237033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.237053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.237069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.237083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.237098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.176 [2024-12-07 08:12:57.237112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.237127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.237141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.237157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.237170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.176 [2024-12-07 08:12:57.237186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.176 [2024-12-07 08:12:57.237199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.177 [2024-12-07 08:12:57.237704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.237960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:12:57.237974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 
08:12:57.238004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4bb10 is same with the state(5) to be set 00:21:57.177 [2024-12-07 08:12:57.238025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.177 [2024-12-07 08:12:57.238036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.177 [2024-12-07 08:12:57.238047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17904 len:8 PRP1 0x0 PRP2 0x0 00:21:57.177 [2024-12-07 08:12:57.238059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:12:57.238114] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b4bb10 was disconnected and freed. reset controller. 00:21:57.177 [2024-12-07 08:12:57.238132] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:57.177 [2024-12-07 08:12:57.238146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:57.177 [2024-12-07 08:12:57.240636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:57.177 [2024-12-07 08:12:57.240675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aeccb0 (9): Bad file descriptor 00:21:57.177 [2024-12-07 08:12:57.271913] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:57.177 [2024-12-07 08:13:01.789899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.789969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.789998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 
[2024-12-07 08:13:01.790156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.177 [2024-12-07 08:13:01.790309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.177 [2024-12-07 08:13:01.790322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790465] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.790976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.790990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.791020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.791048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.791076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.791133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.178 [2024-12-07 08:13:01.791345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.178 [2024-12-07 08:13:01.791391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:57.178 [2024-12-07 08:13:01.791404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
[repeated nvme_qpair.c print_command / print_completion notices: in-flight READ and WRITE commands on qid:1 (lba 14824-15720) each completed as ABORTED - SQ DELETION (00/08) while the submission queue was deleted for the controller reset]
00:21:57.181 [2024-12-07 08:13:01.793834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:57.181 [2024-12-07 08:13:01.793848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:21:57.181 [2024-12-07 08:13:01.793862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b73210 is same with the state(5) to be set 00:21:57.181 [2024-12-07 08:13:01.793880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:57.181 [2024-12-07 08:13:01.793891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:57.181 [2024-12-07 08:13:01.793901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15152 len:8 PRP1 0x0 PRP2 0x0 00:21:57.181 [2024-12-07 08:13:01.793920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.181 [2024-12-07 08:13:01.793976] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b73210 was disconnected and freed. reset controller. 00:21:57.181 [2024-12-07 08:13:01.793993] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:57.181 [2024-12-07 08:13:01.794057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.181 [2024-12-07 08:13:01.794079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.181 [2024-12-07 08:13:01.794095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.181 [2024-12-07 08:13:01.794108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.181 [2024-12-07 08:13:01.794122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.181 [2024-12-07 08:13:01.794135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.181 [2024-12-07 08:13:01.794149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:57.181 [2024-12-07 08:13:01.794162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:57.181 [2024-12-07 08:13:01.794176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:57.181 [2024-12-07 08:13:01.796553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:57.181 [2024-12-07 08:13:01.796591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aeccb0 (9): Bad file descriptor 00:21:57.181 [2024-12-07 08:13:01.826934] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:57.181 00:21:57.181 Latency(us) 00:21:57.181 [2024-12-07T08:13:08.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.181 [2024-12-07T08:13:08.457Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:57.181 Verification LBA range: start 0x0 length 0x4000 00:21:57.181 NVMe0n1 : 15.01 13954.89 54.51 317.01 0.00 8951.27 532.48 15609.48 00:21:57.181 [2024-12-07T08:13:08.457Z] =================================================================================================================== 00:21:57.181 [2024-12-07T08:13:08.457Z] Total : 13954.89 54.51 317.01 0.00 8951.27 532.48 15609.48 00:21:57.181 Received shutdown signal, test time was about 15.000000 seconds 00:21:57.181 00:21:57.181 Latency(us) 00:21:57.181 [2024-12-07T08:13:08.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.181 [2024-12-07T08:13:08.457Z] =================================================================================================================== 00:21:57.181 [2024-12-07T08:13:08.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.181 08:13:07 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:57.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.181 08:13:07 -- host/failover.sh@65 -- # count=3 00:21:57.181 08:13:07 -- host/failover.sh@67 -- # (( count != 3 )) 00:21:57.181 08:13:07 -- host/failover.sh@73 -- # bdevperf_pid=95880 00:21:57.181 08:13:07 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:57.181 08:13:07 -- host/failover.sh@75 -- # waitforlisten 95880 /var/tmp/bdevperf.sock 00:21:57.181 08:13:07 -- common/autotest_common.sh@829 -- # '[' -z 95880 ']' 00:21:57.181 08:13:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.181 08:13:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.181 08:13:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:57.181 08:13:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.181 08:13:07 -- common/autotest_common.sh@10 -- # set +x 00:21:57.748 08:13:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.748 08:13:08 -- common/autotest_common.sh@862 -- # return 0 00:21:57.748 08:13:08 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:58.007 [2024-12-07 08:13:09.079576] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.007 08:13:09 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:58.265 [2024-12-07 08:13:09.315747] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:58.265 08:13:09 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:58.524 NVMe0n1 00:21:58.524 08:13:09 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:58.781 00:21:58.781 08:13:09 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:59.041 00:21:59.041 08:13:10 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.041 08:13:10 -- host/failover.sh@82 -- # grep -q NVMe0 00:21:59.300 08:13:10 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:59.559 08:13:10 -- host/failover.sh@87 -- # sleep 3 00:22:02.847 08:13:13 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:02.847 08:13:13 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:02.847 08:13:14 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.847 08:13:14 -- host/failover.sh@90 -- # run_test_pid=96020 00:22:02.847 08:13:14 -- host/failover.sh@92 -- # wait 96020 00:22:04.222 0 00:22:04.222 08:13:15 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:04.222 [2024-12-07 08:13:07.805498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:04.222 [2024-12-07 08:13:07.805612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95880 ] 00:22:04.222 [2024-12-07 08:13:07.946920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.222 [2024-12-07 08:13:08.016558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.222 [2024-12-07 08:13:10.724707] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:04.222 [2024-12-07 08:13:10.724842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.222 [2024-12-07 08:13:10.724868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.222 [2024-12-07 08:13:10.724887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.222 [2024-12-07 08:13:10.724901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.222 [2024-12-07 08:13:10.724916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.222 [2024-12-07 08:13:10.724929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.222 [2024-12-07 08:13:10.724944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:04.222 [2024-12-07 08:13:10.724958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.222 [2024-12-07 08:13:10.724972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.222 [2024-12-07 08:13:10.725024] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.222 [2024-12-07 08:13:10.725058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53cb0 (9): Bad file descriptor 00:22:04.222 [2024-12-07 08:13:10.735937] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:04.222 Running I/O for 1 seconds... 
00:22:04.222 00:22:04.222 Latency(us) 00:22:04.222 [2024-12-07T08:13:15.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.222 [2024-12-07T08:13:15.498Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:04.222 Verification LBA range: start 0x0 length 0x4000 00:22:04.222 NVMe0n1 : 1.01 14390.45 56.21 0.00 0.00 8854.04 1221.35 9889.98 00:22:04.222 [2024-12-07T08:13:15.498Z] =================================================================================================================== 00:22:04.222 [2024-12-07T08:13:15.498Z] Total : 14390.45 56.21 0.00 0.00 8854.04 1221.35 9889.98 00:22:04.222 08:13:15 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.222 08:13:15 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:04.222 08:13:15 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.480 08:13:15 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:04.480 08:13:15 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:04.737 08:13:15 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:04.996 08:13:16 -- host/failover.sh@101 -- # sleep 3 00:22:08.276 08:13:19 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:08.276 08:13:19 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:08.276 08:13:19 -- host/failover.sh@108 -- # killprocess 95880 00:22:08.276 08:13:19 -- common/autotest_common.sh@936 -- # '[' -z 95880 ']' 00:22:08.276 08:13:19 -- common/autotest_common.sh@940 -- # kill -0 95880 00:22:08.276 08:13:19 -- common/autotest_common.sh@941 -- # uname 00:22:08.276 08:13:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:08.276 08:13:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95880 00:22:08.276 killing process with pid 95880 00:22:08.276 08:13:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:08.276 08:13:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:08.276 08:13:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95880' 00:22:08.276 08:13:19 -- common/autotest_common.sh@955 -- # kill 95880 00:22:08.276 08:13:19 -- common/autotest_common.sh@960 -- # wait 95880 00:22:08.534 08:13:19 -- host/failover.sh@110 -- # sync 00:22:08.534 08:13:19 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.793 08:13:19 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:08.793 08:13:19 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:08.793 08:13:19 -- host/failover.sh@116 -- # nvmftestfini 00:22:08.793 08:13:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:08.793 08:13:19 -- nvmf/common.sh@116 -- # sync 00:22:08.793 08:13:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:08.793 08:13:19 -- nvmf/common.sh@119 -- # set +e 00:22:08.793 08:13:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:08.793 08:13:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:08.793 rmmod nvme_tcp 
00:22:08.793 rmmod nvme_fabrics 00:22:08.793 rmmod nvme_keyring 00:22:08.793 08:13:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:08.793 08:13:20 -- nvmf/common.sh@123 -- # set -e 00:22:08.793 08:13:20 -- nvmf/common.sh@124 -- # return 0 00:22:08.793 08:13:20 -- nvmf/common.sh@477 -- # '[' -n 95509 ']' 00:22:08.793 08:13:20 -- nvmf/common.sh@478 -- # killprocess 95509 00:22:08.793 08:13:20 -- common/autotest_common.sh@936 -- # '[' -z 95509 ']' 00:22:08.793 08:13:20 -- common/autotest_common.sh@940 -- # kill -0 95509 00:22:08.793 08:13:20 -- common/autotest_common.sh@941 -- # uname 00:22:08.793 08:13:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:08.793 08:13:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95509 00:22:09.052 08:13:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:09.052 killing process with pid 95509 00:22:09.052 08:13:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:09.052 08:13:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95509' 00:22:09.052 08:13:20 -- common/autotest_common.sh@955 -- # kill 95509 00:22:09.052 08:13:20 -- common/autotest_common.sh@960 -- # wait 95509 00:22:09.052 08:13:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:09.052 08:13:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:09.052 08:13:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:09.052 08:13:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.052 08:13:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:09.052 08:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.052 08:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.052 08:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.052 08:13:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:09.052 00:22:09.052 real 0m33.087s 00:22:09.052 user 2m8.536s 00:22:09.052 sys 0m4.861s 00:22:09.311 08:13:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:09.311 ************************************ 00:22:09.311 END TEST nvmf_failover 00:22:09.311 ************************************ 00:22:09.311 08:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:09.311 08:13:20 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:09.311 08:13:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:09.311 08:13:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:09.311 08:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:09.311 ************************************ 00:22:09.311 START TEST nvmf_discovery 00:22:09.311 ************************************ 00:22:09.311 08:13:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:09.311 * Looking for test storage... 
00:22:09.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:09.311 08:13:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:09.311 08:13:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:09.311 08:13:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:09.311 08:13:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:09.311 08:13:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:09.311 08:13:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:09.311 08:13:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:09.311 08:13:20 -- scripts/common.sh@335 -- # IFS=.-: 00:22:09.311 08:13:20 -- scripts/common.sh@335 -- # read -ra ver1 00:22:09.311 08:13:20 -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.311 08:13:20 -- scripts/common.sh@336 -- # read -ra ver2 00:22:09.311 08:13:20 -- scripts/common.sh@337 -- # local 'op=<' 00:22:09.311 08:13:20 -- scripts/common.sh@339 -- # ver1_l=2 00:22:09.311 08:13:20 -- scripts/common.sh@340 -- # ver2_l=1 00:22:09.311 08:13:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:09.311 08:13:20 -- scripts/common.sh@343 -- # case "$op" in 00:22:09.311 08:13:20 -- scripts/common.sh@344 -- # : 1 00:22:09.311 08:13:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:09.311 08:13:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:09.311 08:13:20 -- scripts/common.sh@364 -- # decimal 1 00:22:09.311 08:13:20 -- scripts/common.sh@352 -- # local d=1 00:22:09.311 08:13:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.311 08:13:20 -- scripts/common.sh@354 -- # echo 1 00:22:09.311 08:13:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:09.311 08:13:20 -- scripts/common.sh@365 -- # decimal 2 00:22:09.311 08:13:20 -- scripts/common.sh@352 -- # local d=2 00:22:09.311 08:13:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.311 08:13:20 -- scripts/common.sh@354 -- # echo 2 00:22:09.311 08:13:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:09.311 08:13:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:09.311 08:13:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:09.311 08:13:20 -- scripts/common.sh@367 -- # return 0 00:22:09.311 08:13:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.311 08:13:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:09.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.311 --rc genhtml_branch_coverage=1 00:22:09.311 --rc genhtml_function_coverage=1 00:22:09.311 --rc genhtml_legend=1 00:22:09.311 --rc geninfo_all_blocks=1 00:22:09.311 --rc geninfo_unexecuted_blocks=1 00:22:09.311 00:22:09.311 ' 00:22:09.311 08:13:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:09.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.311 --rc genhtml_branch_coverage=1 00:22:09.311 --rc genhtml_function_coverage=1 00:22:09.311 --rc genhtml_legend=1 00:22:09.311 --rc geninfo_all_blocks=1 00:22:09.311 --rc geninfo_unexecuted_blocks=1 00:22:09.311 00:22:09.311 ' 00:22:09.311 08:13:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:09.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.311 --rc genhtml_branch_coverage=1 00:22:09.311 --rc genhtml_function_coverage=1 00:22:09.311 --rc genhtml_legend=1 00:22:09.311 --rc geninfo_all_blocks=1 00:22:09.311 --rc geninfo_unexecuted_blocks=1 00:22:09.311 00:22:09.311 ' 00:22:09.311 
08:13:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:09.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.311 --rc genhtml_branch_coverage=1 00:22:09.311 --rc genhtml_function_coverage=1 00:22:09.311 --rc genhtml_legend=1 00:22:09.311 --rc geninfo_all_blocks=1 00:22:09.311 --rc geninfo_unexecuted_blocks=1 00:22:09.311 00:22:09.311 ' 00:22:09.311 08:13:20 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:09.311 08:13:20 -- nvmf/common.sh@7 -- # uname -s 00:22:09.311 08:13:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.311 08:13:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.311 08:13:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.311 08:13:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.311 08:13:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.311 08:13:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.311 08:13:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.311 08:13:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.311 08:13:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.311 08:13:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.311 08:13:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:22:09.311 08:13:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:22:09.311 08:13:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.311 08:13:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.311 08:13:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:09.311 08:13:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:09.311 08:13:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.311 08:13:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.311 08:13:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.312 08:13:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.312 08:13:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.312 08:13:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.312 08:13:20 -- paths/export.sh@5 -- # export PATH 00:22:09.312 08:13:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.312 08:13:20 -- nvmf/common.sh@46 -- # : 0 00:22:09.312 08:13:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:09.312 08:13:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:09.312 08:13:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:09.312 08:13:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.312 08:13:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.312 08:13:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:09.312 08:13:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:09.312 08:13:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:09.312 08:13:20 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:09.312 08:13:20 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:09.312 08:13:20 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:09.312 08:13:20 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:09.312 08:13:20 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:09.312 08:13:20 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:09.312 08:13:20 -- host/discovery.sh@25 -- # nvmftestinit 00:22:09.312 08:13:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:09.312 08:13:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:09.312 08:13:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:09.312 08:13:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:09.312 08:13:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:09.312 08:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.312 08:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.312 08:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:09.312 08:13:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:09.312 08:13:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:09.312 08:13:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:09.312 08:13:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:09.312 08:13:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:09.312 08:13:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:09.312 08:13:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.312 08:13:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.312 08:13:20 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:09.312 08:13:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:09.312 08:13:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:09.312 08:13:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:09.312 08:13:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:09.312 08:13:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.312 08:13:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:09.312 08:13:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:09.312 08:13:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:09.312 08:13:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:09.312 08:13:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:09.570 08:13:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:09.570 Cannot find device "nvmf_tgt_br" 00:22:09.570 08:13:20 -- nvmf/common.sh@154 -- # true 00:22:09.570 08:13:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:09.570 Cannot find device "nvmf_tgt_br2" 00:22:09.570 08:13:20 -- nvmf/common.sh@155 -- # true 00:22:09.570 08:13:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:09.570 08:13:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:09.570 Cannot find device "nvmf_tgt_br" 00:22:09.570 08:13:20 -- nvmf/common.sh@157 -- # true 00:22:09.570 08:13:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:09.570 Cannot find device "nvmf_tgt_br2" 00:22:09.570 08:13:20 -- nvmf/common.sh@158 -- # true 00:22:09.570 08:13:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:09.570 08:13:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:09.570 08:13:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:09.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:09.570 08:13:20 -- nvmf/common.sh@161 -- # true 00:22:09.570 08:13:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:09.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:09.570 08:13:20 -- nvmf/common.sh@162 -- # true 00:22:09.570 08:13:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:09.570 08:13:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:09.570 08:13:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:09.570 08:13:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:09.570 08:13:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:09.570 08:13:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:09.570 08:13:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:09.570 08:13:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:09.570 08:13:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:09.570 08:13:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:09.570 08:13:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:09.570 08:13:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:09.570 08:13:20 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:09.570 08:13:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:09.570 08:13:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:09.570 08:13:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:09.570 08:13:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:09.829 08:13:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:09.829 08:13:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:09.829 08:13:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:09.829 08:13:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:09.829 08:13:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:09.829 08:13:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:09.829 08:13:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:09.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:22:09.829 00:22:09.830 --- 10.0.0.2 ping statistics --- 00:22:09.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.830 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:09.830 08:13:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:09.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:09.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:22:09.830 00:22:09.830 --- 10.0.0.3 ping statistics --- 00:22:09.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.830 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:09.830 08:13:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:09.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:22:09.830 00:22:09.830 --- 10.0.0.1 ping statistics --- 00:22:09.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.830 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:22:09.830 08:13:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.830 08:13:20 -- nvmf/common.sh@421 -- # return 0 00:22:09.830 08:13:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:09.830 08:13:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.830 08:13:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:09.830 08:13:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:09.830 08:13:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.830 08:13:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:09.830 08:13:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:09.830 08:13:20 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:09.830 08:13:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:09.830 08:13:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:09.830 08:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:09.830 08:13:20 -- nvmf/common.sh@469 -- # nvmfpid=96333 00:22:09.830 08:13:20 -- nvmf/common.sh@470 -- # waitforlisten 96333 00:22:09.830 08:13:20 -- common/autotest_common.sh@829 -- # '[' -z 96333 ']' 00:22:09.830 08:13:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:09.830 08:13:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.830 08:13:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.830 08:13:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.830 08:13:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.830 08:13:20 -- common/autotest_common.sh@10 -- # set +x 00:22:09.830 [2024-12-07 08:13:21.005813] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:09.830 [2024-12-07 08:13:21.005921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.099 [2024-12-07 08:13:21.147074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.099 [2024-12-07 08:13:21.222053] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:10.099 [2024-12-07 08:13:21.222227] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.099 [2024-12-07 08:13:21.222242] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.099 [2024-12-07 08:13:21.222251] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.099 [2024-12-07 08:13:21.222277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.052 08:13:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.052 08:13:21 -- common/autotest_common.sh@862 -- # return 0 00:22:11.052 08:13:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:11.052 08:13:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:11.052 08:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:11.052 08:13:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.052 08:13:21 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.052 08:13:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.052 08:13:21 -- common/autotest_common.sh@10 -- # set +x 00:22:11.052 [2024-12-07 08:13:22.002924] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.052 08:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.052 08:13:22 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:11.052 08:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.052 08:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:11.052 [2024-12-07 08:13:22.011063] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:11.052 08:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.052 08:13:22 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:11.052 08:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.052 08:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:11.052 null0 00:22:11.052 08:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.052 08:13:22 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:11.052 08:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.052 08:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:11.052 null1 00:22:11.052 08:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.052 08:13:22 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:11.052 08:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.052 08:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:11.052 08:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.052 08:13:22 -- host/discovery.sh@45 -- # hostpid=96383 00:22:11.052 08:13:22 -- host/discovery.sh@46 -- # waitforlisten 96383 /tmp/host.sock 00:22:11.052 08:13:22 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:11.052 08:13:22 -- common/autotest_common.sh@829 -- # '[' -z 96383 ']' 00:22:11.052 08:13:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:11.052 08:13:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.052 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:11.052 08:13:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:11.052 08:13:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.052 08:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:11.052 [2024-12-07 08:13:22.097403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:11.052 [2024-12-07 08:13:22.097511] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96383 ] 00:22:11.052 [2024-12-07 08:13:22.240741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.052 [2024-12-07 08:13:22.318522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:11.052 [2024-12-07 08:13:22.319004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.991 08:13:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.991 08:13:22 -- common/autotest_common.sh@862 -- # return 0 00:22:11.991 08:13:22 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:11.991 08:13:22 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:11.991 08:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@72 -- # notify_id=0 00:22:11.991 08:13:23 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # xargs 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # sort 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:11.991 08:13:23 -- host/discovery.sh@79 -- # get_bdev_list 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # sort 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # xargs 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:11.991 08:13:23 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # sort 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # xargs 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:11.991 08:13:23 -- host/discovery.sh@83 -- # get_bdev_list 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # sort 00:22:11.991 08:13:23 -- host/discovery.sh@55 -- # xargs 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:11.991 08:13:23 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.991 08:13:23 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # sort 00:22:11.991 08:13:23 -- host/discovery.sh@59 -- # xargs 00:22:11.991 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.991 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:12.250 08:13:23 -- host/discovery.sh@87 -- # get_bdev_list 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.250 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # sort 00:22:12.250 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # xargs 00:22:12.250 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:12.250 08:13:23 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:12.250 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.250 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 [2024-12-07 08:13:23.355416] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.250 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:12.250 08:13:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:12.250 08:13:23 -- host/discovery.sh@59 -- # sort 00:22:12.250 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.250 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 08:13:23 -- host/discovery.sh@59 -- # xargs 00:22:12.250 08:13:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 
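The empty-string checks in this stretch all go through two small helpers that the trace expands repeatedly; their approximate shape, reconstructed from the rpc_cmd/jq pipelines above (rpc_cmd resolves to scripts/rpc.py talking to the host app's /tmp/host.sock):

# get_subsystem_names: controllers the host app has attached (empty until discovery attaches nvme0)
get_subsystem_names() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}
# get_bdev_list: bdevs exposed on the host side (nvme0n1/nvme0n2 once the namespaces are added)
get_bdev_list() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}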
00:22:12.250 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:12.250 08:13:23 -- host/discovery.sh@93 -- # get_bdev_list 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.250 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:12.250 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # sort 00:22:12.250 08:13:23 -- host/discovery.sh@55 -- # xargs 00:22:12.250 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:12.250 08:13:23 -- host/discovery.sh@94 -- # get_notification_count 00:22:12.250 08:13:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:12.250 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.250 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 08:13:23 -- host/discovery.sh@74 -- # jq '. | length' 00:22:12.250 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@74 -- # notification_count=0 00:22:12.250 08:13:23 -- host/discovery.sh@75 -- # notify_id=0 00:22:12.250 08:13:23 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:12.250 08:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.250 08:13:23 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 08:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.250 08:13:23 -- host/discovery.sh@100 -- # sleep 1 00:22:12.818 [2024-12-07 08:13:24.016480] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:12.818 [2024-12-07 08:13:24.016514] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:12.818 [2024-12-07 08:13:24.016534] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:13.075 [2024-12-07 08:13:24.102618] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:13.075 [2024-12-07 08:13:24.158475] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:13.075 [2024-12-07 08:13:24.158503] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:13.333 08:13:24 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:13.333 08:13:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:13.333 08:13:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:13.333 08:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.333 08:13:24 -- common/autotest_common.sh@10 -- # set +x 00:22:13.333 08:13:24 -- host/discovery.sh@59 -- # sort 00:22:13.333 08:13:24 -- host/discovery.sh@59 -- # xargs 00:22:13.333 08:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.333 08:13:24 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.333 08:13:24 -- host/discovery.sh@102 -- # get_bdev_list 00:22:13.333 08:13:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:13.333 08:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.333 08:13:24 -- common/autotest_common.sh@10 -- # set +x 00:22:13.333 08:13:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:13.333 08:13:24 -- host/discovery.sh@55 -- # xargs 00:22:13.333 08:13:24 -- host/discovery.sh@55 -- # sort 00:22:13.591 08:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.591 08:13:24 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:13.591 08:13:24 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:13.591 08:13:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:13.591 08:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.591 08:13:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:13.591 08:13:24 -- common/autotest_common.sh@10 -- # set +x 00:22:13.591 08:13:24 -- host/discovery.sh@63 -- # sort -n 00:22:13.591 08:13:24 -- host/discovery.sh@63 -- # xargs 00:22:13.591 08:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.591 08:13:24 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:13.591 08:13:24 -- host/discovery.sh@104 -- # get_notification_count 00:22:13.591 08:13:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:13.591 08:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.591 08:13:24 -- host/discovery.sh@74 -- # jq '. | length' 00:22:13.591 08:13:24 -- common/autotest_common.sh@10 -- # set +x 00:22:13.591 08:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.591 08:13:24 -- host/discovery.sh@74 -- # notification_count=1 00:22:13.591 08:13:24 -- host/discovery.sh@75 -- # notify_id=1 00:22:13.591 08:13:24 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:13.591 08:13:24 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:13.591 08:13:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.591 08:13:24 -- common/autotest_common.sh@10 -- # set +x 00:22:13.591 08:13:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.591 08:13:24 -- host/discovery.sh@109 -- # sleep 1 00:22:14.523 08:13:25 -- host/discovery.sh@110 -- # get_bdev_list 00:22:14.523 08:13:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:14.523 08:13:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:14.523 08:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.523 08:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:14.523 08:13:25 -- host/discovery.sh@55 -- # sort 00:22:14.523 08:13:25 -- host/discovery.sh@55 -- # xargs 00:22:14.523 08:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.781 08:13:25 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:14.781 08:13:25 -- host/discovery.sh@111 -- # get_notification_count 00:22:14.781 08:13:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:14.781 08:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.781 08:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:14.781 08:13:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:14.781 08:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.781 08:13:25 -- host/discovery.sh@74 -- # notification_count=1 00:22:14.781 08:13:25 -- host/discovery.sh@75 -- # notify_id=2 00:22:14.781 08:13:25 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:14.781 08:13:25 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:14.781 08:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.781 08:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:14.781 [2024-12-07 08:13:25.872694] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:14.781 [2024-12-07 08:13:25.873424] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:14.781 [2024-12-07 08:13:25.873477] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:14.781 08:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.781 08:13:25 -- host/discovery.sh@117 -- # sleep 1 00:22:14.781 [2024-12-07 08:13:25.959421] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:14.781 [2024-12-07 08:13:26.019687] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:14.781 [2024-12-07 08:13:26.019710] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:14.781 [2024-12-07 08:13:26.019732] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:15.713 08:13:26 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:15.713 08:13:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:15.713 08:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.713 08:13:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:15.713 08:13:26 -- common/autotest_common.sh@10 -- # set +x 00:22:15.713 08:13:26 -- host/discovery.sh@59 -- # sort 00:22:15.713 08:13:26 -- host/discovery.sh@59 -- # xargs 00:22:15.713 08:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.713 08:13:26 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.713 08:13:26 -- host/discovery.sh@119 -- # get_bdev_list 00:22:15.713 08:13:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:15.713 08:13:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:15.713 08:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.713 08:13:26 -- host/discovery.sh@55 -- # sort 00:22:15.713 08:13:26 -- common/autotest_common.sh@10 -- # set +x 00:22:15.713 08:13:26 -- host/discovery.sh@55 -- # xargs 00:22:15.713 08:13:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.971 08:13:26 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:15.971 08:13:26 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:15.971 08:13:26 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:15.971 08:13:26 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:15.971 08:13:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.971 08:13:26 -- common/autotest_common.sh@10 -- # set +x 00:22:15.971 08:13:26 -- host/discovery.sh@63 
-- # sort -n 00:22:15.971 08:13:26 -- host/discovery.sh@63 -- # xargs 00:22:15.971 08:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.971 08:13:27 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:15.971 08:13:27 -- host/discovery.sh@121 -- # get_notification_count 00:22:15.971 08:13:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:15.971 08:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.971 08:13:27 -- common/autotest_common.sh@10 -- # set +x 00:22:15.971 08:13:27 -- host/discovery.sh@74 -- # jq '. | length' 00:22:15.971 08:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.971 08:13:27 -- host/discovery.sh@74 -- # notification_count=0 00:22:15.971 08:13:27 -- host/discovery.sh@75 -- # notify_id=2 00:22:15.971 08:13:27 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:15.971 08:13:27 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:15.971 08:13:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.971 08:13:27 -- common/autotest_common.sh@10 -- # set +x 00:22:15.971 [2024-12-07 08:13:27.097716] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:15.971 [2024-12-07 08:13:27.097764] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:15.971 08:13:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.971 08:13:27 -- host/discovery.sh@127 -- # sleep 1 00:22:15.971 [2024-12-07 08:13:27.106239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.971 [2024-12-07 08:13:27.106685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.971 [2024-12-07 08:13:27.106794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.971 [2024-12-07 08:13:27.106884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.971 [2024-12-07 08:13:27.106944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.971 [2024-12-07 08:13:27.107010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.971 [2024-12-07 08:13:27.107074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.971 [2024-12-07 08:13:27.107140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.971 [2024-12-07 08:13:27.107216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37570 is same with the state(5) to be set 00:22:15.971 [2024-12-07 08:13:27.116181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37570 (9): Bad file descriptor 00:22:15.971 [2024-12-07 08:13:27.126197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.971 [2024-12-07 08:13:27.126405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:22:15.971 [2024-12-07 08:13:27.126587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.971 [2024-12-07 08:13:27.126671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37570 with addr=10.0.0.2, port=4420 00:22:15.971 [2024-12-07 08:13:27.126741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37570 is same with the state(5) to be set 00:22:15.971 [2024-12-07 08:13:27.126810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37570 (9): Bad file descriptor 00:22:15.971 [2024-12-07 08:13:27.126892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.971 [2024-12-07 08:13:27.126965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.971 [2024-12-07 08:13:27.127029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.971 [2024-12-07 08:13:27.127100] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.971 [2024-12-07 08:13:27.136348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.971 [2024-12-07 08:13:27.136512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.971 [2024-12-07 08:13:27.136687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.971 [2024-12-07 08:13:27.136764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37570 with addr=10.0.0.2, port=4420 00:22:15.971 [2024-12-07 08:13:27.136827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37570 is same with the state(5) to be set 00:22:15.971 [2024-12-07 08:13:27.136900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37570 (9): Bad file descriptor 00:22:15.971 [2024-12-07 08:13:27.137008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.971 [2024-12-07 08:13:27.137075] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.971 [2024-12-07 08:13:27.137148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.971 [2024-12-07 08:13:27.137270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:15.971 [2024-12-07 08:13:27.146477] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.971 [2024-12-07 08:13:27.146649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.971 [2024-12-07 08:13:27.146802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.971 [2024-12-07 08:13:27.146824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37570 with addr=10.0.0.2, port=4420 00:22:15.971 [2024-12-07 08:13:27.146835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37570 is same with the state(5) to be set 00:22:15.971 [2024-12-07 08:13:27.146852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37570 (9): Bad file descriptor 00:22:15.971 [2024-12-07 08:13:27.146866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.971 [2024-12-07 08:13:27.146873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.971 [2024-12-07 08:13:27.146882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.971 [2024-12-07 08:13:27.146897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.971 [2024-12-07 08:13:27.156623] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.971 [2024-12-07 08:13:27.156712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.971 [2024-12-07 08:13:27.156754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.972 [2024-12-07 08:13:27.156768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37570 with addr=10.0.0.2, port=4420 00:22:15.972 [2024-12-07 08:13:27.156777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37570 is same with the state(5) to be set 00:22:15.972 [2024-12-07 08:13:27.156792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37570 (9): Bad file descriptor 00:22:15.972 [2024-12-07 08:13:27.156805] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.972 [2024-12-07 08:13:27.156812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.972 [2024-12-07 08:13:27.156820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.972 [2024-12-07 08:13:27.156833] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:15.972 [2024-12-07 08:13:27.166681] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.972 [2024-12-07 08:13:27.166762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.972 [2024-12-07 08:13:27.166802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.972 [2024-12-07 08:13:27.166816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37570 with addr=10.0.0.2, port=4420 00:22:15.972 [2024-12-07 08:13:27.166824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37570 is same with the state(5) to be set 00:22:15.972 [2024-12-07 08:13:27.166837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37570 (9): Bad file descriptor 00:22:15.972 [2024-12-07 08:13:27.166849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.972 [2024-12-07 08:13:27.166855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.972 [2024-12-07 08:13:27.166863] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.972 [2024-12-07 08:13:27.166875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:15.972 [2024-12-07 08:13:27.176736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:15.972 [2024-12-07 08:13:27.176822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.972 [2024-12-07 08:13:27.176864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.972 [2024-12-07 08:13:27.176878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe37570 with addr=10.0.0.2, port=4420 00:22:15.972 [2024-12-07 08:13:27.176887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe37570 is same with the state(5) to be set 00:22:15.972 [2024-12-07 08:13:27.176900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe37570 (9): Bad file descriptor 00:22:15.972 [2024-12-07 08:13:27.176912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:15.972 [2024-12-07 08:13:27.176919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:15.972 [2024-12-07 08:13:27.176927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:15.972 [2024-12-07 08:13:27.176939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
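The repeated connect() failures with errno = 111 (ECONNREFUSED) above are the expected fallout of removing the 4420 listener while a path to it is still attached: bdev_nvme keeps retrying the dead trid until the discovery poller prunes it and only the 4421 path remains, which the @130 check below confirms. A quick manual equivalent of that check, mirroring the get_subsystem_paths helper seen in the trace:

# List the transport service IDs of the paths still attached to controller nvme0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# expected output after the 4420 listener is removed: 4421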
00:22:15.972 [2024-12-07 08:13:27.183972] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:15.972 [2024-12-07 08:13:27.184013] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:16.907 08:13:28 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:16.907 08:13:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:16.907 08:13:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.907 08:13:28 -- common/autotest_common.sh@10 -- # set +x 00:22:16.907 08:13:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:16.907 08:13:28 -- host/discovery.sh@59 -- # sort 00:22:16.907 08:13:28 -- host/discovery.sh@59 -- # xargs 00:22:16.907 08:13:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.907 08:13:28 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.907 08:13:28 -- host/discovery.sh@129 -- # get_bdev_list 00:22:16.907 08:13:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:16.907 08:13:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:16.907 08:13:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.907 08:13:28 -- host/discovery.sh@55 -- # sort 00:22:16.907 08:13:28 -- common/autotest_common.sh@10 -- # set +x 00:22:16.907 08:13:28 -- host/discovery.sh@55 -- # xargs 00:22:17.164 08:13:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.164 08:13:28 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:17.164 08:13:28 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:17.164 08:13:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:17.164 08:13:28 -- host/discovery.sh@63 -- # sort -n 00:22:17.164 08:13:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:17.164 08:13:28 -- host/discovery.sh@63 -- # xargs 00:22:17.164 08:13:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.164 08:13:28 -- common/autotest_common.sh@10 -- # set +x 00:22:17.164 08:13:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.164 08:13:28 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:17.164 08:13:28 -- host/discovery.sh@131 -- # get_notification_count 00:22:17.164 08:13:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:17.164 08:13:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.164 08:13:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:17.164 08:13:28 -- common/autotest_common.sh@10 -- # set +x 00:22:17.164 08:13:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.164 08:13:28 -- host/discovery.sh@74 -- # notification_count=0 00:22:17.164 08:13:28 -- host/discovery.sh@75 -- # notify_id=2 00:22:17.164 08:13:28 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:17.164 08:13:28 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:17.164 08:13:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.164 08:13:28 -- common/autotest_common.sh@10 -- # set +x 00:22:17.164 08:13:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.164 08:13:28 -- host/discovery.sh@135 -- # sleep 1 00:22:18.098 08:13:29 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:18.098 08:13:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:18.098 08:13:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.098 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:22:18.098 08:13:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:18.098 08:13:29 -- host/discovery.sh@59 -- # sort 00:22:18.098 08:13:29 -- host/discovery.sh@59 -- # xargs 00:22:18.098 08:13:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.356 08:13:29 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:18.356 08:13:29 -- host/discovery.sh@137 -- # get_bdev_list 00:22:18.356 08:13:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:18.356 08:13:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.356 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:22:18.356 08:13:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:18.356 08:13:29 -- host/discovery.sh@55 -- # sort 00:22:18.356 08:13:29 -- host/discovery.sh@55 -- # xargs 00:22:18.356 08:13:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.356 08:13:29 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:18.356 08:13:29 -- host/discovery.sh@138 -- # get_notification_count 00:22:18.356 08:13:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:18.356 08:13:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.356 08:13:29 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:18.356 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:22:18.356 08:13:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.356 08:13:29 -- host/discovery.sh@74 -- # notification_count=2 00:22:18.356 08:13:29 -- host/discovery.sh@75 -- # notify_id=4 00:22:18.356 08:13:29 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:18.356 08:13:29 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:18.356 08:13:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.356 08:13:29 -- common/autotest_common.sh@10 -- # set +x 00:22:19.294 [2024-12-07 08:13:30.517743] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:19.294 [2024-12-07 08:13:30.517768] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:19.294 [2024-12-07 08:13:30.517801] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:19.553 [2024-12-07 08:13:30.603826] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:19.553 [2024-12-07 08:13:30.662938] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:19.553 [2024-12-07 08:13:30.662990] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:19.553 08:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.553 08:13:30 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:19.553 08:13:30 -- common/autotest_common.sh@650 -- # local es=0 00:22:19.553 08:13:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:19.553 08:13:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.553 08:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.553 08:13:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.553 08:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.553 08:13:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:19.553 08:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.553 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:22:19.553 2024/12/07 08:13:30 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:19.553 request: 00:22:19.553 { 00:22:19.553 "method": "bdev_nvme_start_discovery", 00:22:19.553 "params": { 00:22:19.553 "name": "nvme", 00:22:19.553 "trtype": "tcp", 00:22:19.553 "traddr": "10.0.0.2", 00:22:19.553 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:19.553 "adrfam": "ipv4", 00:22:19.553 "trsvcid": "8009", 00:22:19.553 "wait_for_attach": true 00:22:19.553 } 00:22:19.553 } 00:22:19.553 Got JSON-RPC error response 00:22:19.553 GoRPCClient: 
error on JSON-RPC call 00:22:19.553 08:13:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.553 08:13:30 -- common/autotest_common.sh@653 -- # es=1 00:22:19.553 08:13:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.553 08:13:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.553 08:13:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.553 08:13:30 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:19.553 08:13:30 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:19.553 08:13:30 -- host/discovery.sh@67 -- # sort 00:22:19.553 08:13:30 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:19.553 08:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.553 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:22:19.553 08:13:30 -- host/discovery.sh@67 -- # xargs 00:22:19.553 08:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.553 08:13:30 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:19.553 08:13:30 -- host/discovery.sh@147 -- # get_bdev_list 00:22:19.554 08:13:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.554 08:13:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.554 08:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.554 08:13:30 -- host/discovery.sh@55 -- # sort 00:22:19.554 08:13:30 -- host/discovery.sh@55 -- # xargs 00:22:19.554 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:22:19.554 08:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.554 08:13:30 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:19.554 08:13:30 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:19.554 08:13:30 -- common/autotest_common.sh@650 -- # local es=0 00:22:19.554 08:13:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:19.554 08:13:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.554 08:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.554 08:13:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.554 08:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.554 08:13:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:19.554 08:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.554 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:22:19.554 2024/12/07 08:13:30 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:19.554 request: 00:22:19.554 { 00:22:19.554 "method": "bdev_nvme_start_discovery", 00:22:19.554 "params": { 00:22:19.554 "name": "nvme_second", 00:22:19.554 "trtype": "tcp", 00:22:19.554 "traddr": "10.0.0.2", 00:22:19.554 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:19.554 "adrfam": "ipv4", 00:22:19.554 "trsvcid": "8009", 00:22:19.554 "wait_for_attach": true 00:22:19.554 } 00:22:19.554 } 
00:22:19.554 Got JSON-RPC error response 00:22:19.554 GoRPCClient: error on JSON-RPC call 00:22:19.554 08:13:30 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:19.554 08:13:30 -- common/autotest_common.sh@653 -- # es=1 00:22:19.554 08:13:30 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:19.554 08:13:30 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:19.554 08:13:30 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:19.554 08:13:30 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:19.554 08:13:30 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:19.554 08:13:30 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:19.554 08:13:30 -- host/discovery.sh@67 -- # sort 00:22:19.554 08:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.554 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:22:19.554 08:13:30 -- host/discovery.sh@67 -- # xargs 00:22:19.813 08:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.813 08:13:30 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:19.813 08:13:30 -- host/discovery.sh@153 -- # get_bdev_list 00:22:19.813 08:13:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:19.813 08:13:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:19.813 08:13:30 -- host/discovery.sh@55 -- # xargs 00:22:19.813 08:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.813 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:22:19.813 08:13:30 -- host/discovery.sh@55 -- # sort 00:22:19.813 08:13:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.813 08:13:30 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:19.813 08:13:30 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:19.813 08:13:30 -- common/autotest_common.sh@650 -- # local es=0 00:22:19.813 08:13:30 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:19.813 08:13:30 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:19.813 08:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.813 08:13:30 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:19.813 08:13:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:19.813 08:13:30 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:19.813 08:13:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.813 08:13:30 -- common/autotest_common.sh@10 -- # set +x 00:22:20.749 [2024-12-07 08:13:31.929086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.749 [2024-12-07 08:13:31.929180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:20.749 [2024-12-07 08:13:31.929197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2f80 with addr=10.0.0.2, port=8010 00:22:20.749 [2024-12-07 08:13:31.929244] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:20.749 [2024-12-07 08:13:31.929255] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:20.749 [2024-12-07 
08:13:31.929264] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:21.685 [2024-12-07 08:13:32.929075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.685 [2024-12-07 08:13:32.929166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.685 [2024-12-07 08:13:32.929183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeabca0 with addr=10.0.0.2, port=8010 00:22:21.685 [2024-12-07 08:13:32.929203] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:21.685 [2024-12-07 08:13:32.929221] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:21.685 [2024-12-07 08:13:32.929247] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:23.065 [2024-12-07 08:13:33.928959] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:23.065 2024/12/07 08:13:33 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:23.065 request: 00:22:23.065 { 00:22:23.065 "method": "bdev_nvme_start_discovery", 00:22:23.065 "params": { 00:22:23.065 "name": "nvme_second", 00:22:23.065 "trtype": "tcp", 00:22:23.065 "traddr": "10.0.0.2", 00:22:23.065 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:23.065 "adrfam": "ipv4", 00:22:23.065 "trsvcid": "8010", 00:22:23.066 "attach_timeout_ms": 3000 00:22:23.066 } 00:22:23.066 } 00:22:23.066 Got JSON-RPC error response 00:22:23.066 GoRPCClient: error on JSON-RPC call 00:22:23.066 08:13:33 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:23.066 08:13:33 -- common/autotest_common.sh@653 -- # es=1 00:22:23.066 08:13:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:23.066 08:13:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:23.066 08:13:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:23.066 08:13:33 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:23.066 08:13:33 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:23.066 08:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.066 08:13:33 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:23.066 08:13:33 -- common/autotest_common.sh@10 -- # set +x 00:22:23.066 08:13:33 -- host/discovery.sh@67 -- # sort 00:22:23.066 08:13:33 -- host/discovery.sh@67 -- # xargs 00:22:23.066 08:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.066 08:13:33 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:23.066 08:13:33 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:23.066 08:13:33 -- host/discovery.sh@162 -- # kill 96383 00:22:23.066 08:13:33 -- host/discovery.sh@163 -- # nvmftestfini 00:22:23.066 08:13:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:23.066 08:13:33 -- nvmf/common.sh@116 -- # sync 00:22:23.066 08:13:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:23.066 08:13:34 -- nvmf/common.sh@119 -- # set +e 00:22:23.066 08:13:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:23.066 08:13:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:23.066 rmmod nvme_tcp 00:22:23.066 rmmod nvme_fabrics 00:22:23.066 rmmod nvme_keyring 
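Before tearing down, the test has exercised the two failure paths of bdev_nvme_start_discovery seen above: reusing an existing discovery name fails immediately with JSON-RPC Code=-17 (File exists), and pointing nvme_second at port 8010, where nothing listens, retries until attach_timeout_ms expires and fails with Code=-110 (Connection timed out). Run in isolation they would look roughly like this (socket path, addresses and NQNs as in the trace; the real script wraps both calls in its NOT helper so the non-zero exit status counts as a pass):

# 1) duplicate discovery name -> error Code=-17 (File exists)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# 2) no listener on 8010 -> error Code=-110 (Connection timed out) after ~3000 ms
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
    -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000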
00:22:23.066 08:13:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:23.066 08:13:34 -- nvmf/common.sh@123 -- # set -e 00:22:23.066 08:13:34 -- nvmf/common.sh@124 -- # return 0 00:22:23.066 08:13:34 -- nvmf/common.sh@477 -- # '[' -n 96333 ']' 00:22:23.066 08:13:34 -- nvmf/common.sh@478 -- # killprocess 96333 00:22:23.066 08:13:34 -- common/autotest_common.sh@936 -- # '[' -z 96333 ']' 00:22:23.066 08:13:34 -- common/autotest_common.sh@940 -- # kill -0 96333 00:22:23.066 08:13:34 -- common/autotest_common.sh@941 -- # uname 00:22:23.066 08:13:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:23.066 08:13:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96333 00:22:23.066 08:13:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:23.066 killing process with pid 96333 00:22:23.066 08:13:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:23.066 08:13:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96333' 00:22:23.066 08:13:34 -- common/autotest_common.sh@955 -- # kill 96333 00:22:23.066 08:13:34 -- common/autotest_common.sh@960 -- # wait 96333 00:22:23.066 08:13:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:23.066 08:13:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:23.066 08:13:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:23.066 08:13:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.066 08:13:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:23.066 08:13:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.066 08:13:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.066 08:13:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.325 08:13:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:23.325 00:22:23.325 real 0m13.988s 00:22:23.325 user 0m27.209s 00:22:23.325 sys 0m1.696s 00:22:23.325 08:13:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:23.325 08:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:23.325 ************************************ 00:22:23.325 END TEST nvmf_discovery 00:22:23.325 ************************************ 00:22:23.325 08:13:34 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:23.325 08:13:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:23.325 08:13:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:23.325 08:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:23.325 ************************************ 00:22:23.325 START TEST nvmf_discovery_remove_ifc 00:22:23.325 ************************************ 00:22:23.325 08:13:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:23.325 * Looking for test storage... 
00:22:23.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:23.325 08:13:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:23.325 08:13:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:23.325 08:13:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:23.325 08:13:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:23.325 08:13:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:23.325 08:13:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:23.325 08:13:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:23.325 08:13:34 -- scripts/common.sh@335 -- # IFS=.-: 00:22:23.325 08:13:34 -- scripts/common.sh@335 -- # read -ra ver1 00:22:23.325 08:13:34 -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.325 08:13:34 -- scripts/common.sh@336 -- # read -ra ver2 00:22:23.325 08:13:34 -- scripts/common.sh@337 -- # local 'op=<' 00:22:23.325 08:13:34 -- scripts/common.sh@339 -- # ver1_l=2 00:22:23.325 08:13:34 -- scripts/common.sh@340 -- # ver2_l=1 00:22:23.325 08:13:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:23.325 08:13:34 -- scripts/common.sh@343 -- # case "$op" in 00:22:23.325 08:13:34 -- scripts/common.sh@344 -- # : 1 00:22:23.325 08:13:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:23.325 08:13:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.325 08:13:34 -- scripts/common.sh@364 -- # decimal 1 00:22:23.325 08:13:34 -- scripts/common.sh@352 -- # local d=1 00:22:23.325 08:13:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.325 08:13:34 -- scripts/common.sh@354 -- # echo 1 00:22:23.325 08:13:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:23.325 08:13:34 -- scripts/common.sh@365 -- # decimal 2 00:22:23.325 08:13:34 -- scripts/common.sh@352 -- # local d=2 00:22:23.325 08:13:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.325 08:13:34 -- scripts/common.sh@354 -- # echo 2 00:22:23.325 08:13:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:23.325 08:13:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:23.325 08:13:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:23.325 08:13:34 -- scripts/common.sh@367 -- # return 0 00:22:23.325 08:13:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.325 08:13:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:23.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.325 --rc genhtml_branch_coverage=1 00:22:23.325 --rc genhtml_function_coverage=1 00:22:23.325 --rc genhtml_legend=1 00:22:23.325 --rc geninfo_all_blocks=1 00:22:23.325 --rc geninfo_unexecuted_blocks=1 00:22:23.325 00:22:23.325 ' 00:22:23.325 08:13:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:23.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.325 --rc genhtml_branch_coverage=1 00:22:23.325 --rc genhtml_function_coverage=1 00:22:23.325 --rc genhtml_legend=1 00:22:23.325 --rc geninfo_all_blocks=1 00:22:23.325 --rc geninfo_unexecuted_blocks=1 00:22:23.325 00:22:23.325 ' 00:22:23.325 08:13:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:23.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.325 --rc genhtml_branch_coverage=1 00:22:23.325 --rc genhtml_function_coverage=1 00:22:23.325 --rc genhtml_legend=1 00:22:23.325 --rc geninfo_all_blocks=1 00:22:23.325 --rc geninfo_unexecuted_blocks=1 00:22:23.325 00:22:23.325 ' 00:22:23.326 
08:13:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:23.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.326 --rc genhtml_branch_coverage=1 00:22:23.326 --rc genhtml_function_coverage=1 00:22:23.326 --rc genhtml_legend=1 00:22:23.326 --rc geninfo_all_blocks=1 00:22:23.326 --rc geninfo_unexecuted_blocks=1 00:22:23.326 00:22:23.326 ' 00:22:23.326 08:13:34 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:23.326 08:13:34 -- nvmf/common.sh@7 -- # uname -s 00:22:23.326 08:13:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.326 08:13:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.326 08:13:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.326 08:13:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.326 08:13:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.326 08:13:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.326 08:13:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.326 08:13:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.326 08:13:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.326 08:13:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.585 08:13:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:22:23.585 08:13:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:22:23.585 08:13:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.585 08:13:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.585 08:13:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:23.585 08:13:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:23.585 08:13:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.585 08:13:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.585 08:13:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.585 08:13:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.585 08:13:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.585 08:13:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.585 08:13:34 -- paths/export.sh@5 -- # export PATH 00:22:23.585 08:13:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.585 08:13:34 -- nvmf/common.sh@46 -- # : 0 00:22:23.585 08:13:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:23.585 08:13:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:23.585 08:13:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:23.585 08:13:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.585 08:13:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.585 08:13:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:23.585 08:13:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:23.585 08:13:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:23.585 08:13:34 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:23.585 08:13:34 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:23.585 08:13:34 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:23.585 08:13:34 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:23.585 08:13:34 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:23.585 08:13:34 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:23.585 08:13:34 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:23.585 08:13:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:23.585 08:13:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.585 08:13:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:23.585 08:13:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:23.585 08:13:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:23.585 08:13:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.585 08:13:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.585 08:13:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.585 08:13:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:23.585 08:13:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:23.585 08:13:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:23.585 08:13:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:23.585 08:13:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:23.585 08:13:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:23.585 08:13:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.585 08:13:34 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.585 08:13:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:23.585 08:13:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:23.585 08:13:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:23.585 08:13:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:23.585 08:13:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:23.586 08:13:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.586 08:13:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:23.586 08:13:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:23.586 08:13:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:23.586 08:13:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:23.586 08:13:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:23.586 08:13:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:23.586 Cannot find device "nvmf_tgt_br" 00:22:23.586 08:13:34 -- nvmf/common.sh@154 -- # true 00:22:23.586 08:13:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:23.586 Cannot find device "nvmf_tgt_br2" 00:22:23.586 08:13:34 -- nvmf/common.sh@155 -- # true 00:22:23.586 08:13:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:23.586 08:13:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:23.586 Cannot find device "nvmf_tgt_br" 00:22:23.586 08:13:34 -- nvmf/common.sh@157 -- # true 00:22:23.586 08:13:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:23.586 Cannot find device "nvmf_tgt_br2" 00:22:23.586 08:13:34 -- nvmf/common.sh@158 -- # true 00:22:23.586 08:13:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:23.586 08:13:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:23.586 08:13:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:23.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.586 08:13:34 -- nvmf/common.sh@161 -- # true 00:22:23.586 08:13:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:23.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:23.586 08:13:34 -- nvmf/common.sh@162 -- # true 00:22:23.586 08:13:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:23.586 08:13:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:23.586 08:13:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:23.586 08:13:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:23.586 08:13:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:23.586 08:13:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:23.586 08:13:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:23.586 08:13:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:23.586 08:13:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:23.586 08:13:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:23.586 08:13:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:23.586 08:13:34 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:23.586 08:13:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:23.586 08:13:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:23.586 08:13:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:23.586 08:13:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:23.586 08:13:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:23.586 08:13:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:23.586 08:13:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:23.586 08:13:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:23.845 08:13:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:23.845 08:13:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:23.845 08:13:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:23.845 08:13:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:23.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:22:23.845 00:22:23.845 --- 10.0.0.2 ping statistics --- 00:22:23.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.845 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:22:23.845 08:13:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:23.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:23.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:23.845 00:22:23.845 --- 10.0.0.3 ping statistics --- 00:22:23.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.845 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:23.845 08:13:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:23.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:23.845 00:22:23.845 --- 10.0.0.1 ping statistics --- 00:22:23.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.845 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:23.845 08:13:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.845 08:13:34 -- nvmf/common.sh@421 -- # return 0 00:22:23.845 08:13:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:23.845 08:13:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.845 08:13:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:23.845 08:13:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:23.845 08:13:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.845 08:13:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:23.845 08:13:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:23.845 08:13:34 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:23.845 08:13:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:23.845 08:13:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:23.845 08:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:23.845 08:13:34 -- nvmf/common.sh@469 -- # nvmfpid=96889 00:22:23.845 08:13:34 -- nvmf/common.sh@470 -- # waitforlisten 96889 00:22:23.845 08:13:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:23.845 08:13:34 -- common/autotest_common.sh@829 -- # '[' -z 96889 ']' 00:22:23.845 08:13:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.845 08:13:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:23.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.845 08:13:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.845 08:13:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:23.845 08:13:34 -- common/autotest_common.sh@10 -- # set +x 00:22:23.845 [2024-12-07 08:13:34.977139] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:23.845 [2024-12-07 08:13:34.977235] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.845 [2024-12-07 08:13:35.114602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.103 [2024-12-07 08:13:35.183802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:24.103 [2024-12-07 08:13:35.183947] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.103 [2024-12-07 08:13:35.183959] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.103 [2024-12-07 08:13:35.183967] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
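The block of ip/iptables commands just traced is nvmf_veth_init (nvmf/common.sh@140 onward) building the virtual test network before the target comes up: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs, the initiator end left in the root namespace, and a bridge joining the peer ends, all verified by the three pings above. Condensed from the trace (teardown of leftovers and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays in the root netns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target port 10.0.0.2
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target port 10.0.0.3
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> target ports
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target netns -> initiator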
00:22:24.103 [2024-12-07 08:13:35.183991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.037 08:13:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.037 08:13:36 -- common/autotest_common.sh@862 -- # return 0 00:22:25.037 08:13:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:25.037 08:13:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.037 08:13:36 -- common/autotest_common.sh@10 -- # set +x 00:22:25.037 08:13:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.037 08:13:36 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:25.037 08:13:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.037 08:13:36 -- common/autotest_common.sh@10 -- # set +x 00:22:25.037 [2024-12-07 08:13:36.061820] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.037 [2024-12-07 08:13:36.069947] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:25.037 null0 00:22:25.037 [2024-12-07 08:13:36.101875] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.037 08:13:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.037 08:13:36 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96939 00:22:25.037 08:13:36 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:25.037 08:13:36 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96939 /tmp/host.sock 00:22:25.037 08:13:36 -- common/autotest_common.sh@829 -- # '[' -z 96939 ']' 00:22:25.037 08:13:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:25.037 08:13:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.037 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:25.037 08:13:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:25.037 08:13:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.037 08:13:36 -- common/autotest_common.sh@10 -- # set +x 00:22:25.037 [2024-12-07 08:13:36.179421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
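At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 96889 in the trace) and discovery_remove_ifc.sh@43 has provisioned it over the default /var/tmp/spdk.sock, which is what produced the TCP transport, null0 and listener notices above; the @58 line then starts a second nvmf_tgt (-m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme) that plays the host/initiator role. The rpc_cmd heredoc behind @43 is not shown in the trace, but judging from those notices it corresponds roughly to this rpc.py sequence (standard SPDK RPC names; the null bdev size and block size here are assumptions):

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp                  # trace shows NVMF_TRANSPORT_OPTS='-t tcp -o'
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009                     # discovery listener on port 8009
  $RPC bdev_null_create null0 1000 512               # backing bdev "null0" (size/block size assumed)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420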
00:22:25.037 [2024-12-07 08:13:36.179520] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96939 ] 00:22:25.296 [2024-12-07 08:13:36.318669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.296 [2024-12-07 08:13:36.390788] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:25.296 [2024-12-07 08:13:36.390944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.863 08:13:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.864 08:13:37 -- common/autotest_common.sh@862 -- # return 0 00:22:25.864 08:13:37 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.864 08:13:37 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:25.864 08:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.864 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:22:26.122 08:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.122 08:13:37 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:26.122 08:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.122 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:22:26.122 08:13:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.122 08:13:37 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:26.122 08:13:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.123 08:13:37 -- common/autotest_common.sh@10 -- # set +x 00:22:27.059 [2024-12-07 08:13:38.254881] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:27.059 [2024-12-07 08:13:38.254936] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:27.059 [2024-12-07 08:13:38.254954] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:27.318 [2024-12-07 08:13:38.340985] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:27.318 [2024-12-07 08:13:38.397826] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:27.318 [2024-12-07 08:13:38.397876] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:27.318 [2024-12-07 08:13:38.397905] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:27.318 [2024-12-07 08:13:38.397921] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:27.318 [2024-12-07 08:13:38.397947] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:27.318 08:13:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:27.318 [2024-12-07 
08:13:38.403512] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1352da0 was disconnected and freed. delete nvme_qpair. 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:27.318 08:13:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.318 08:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:27.318 08:13:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.318 08:13:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.318 08:13:38 -- common/autotest_common.sh@10 -- # set +x 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:27.318 08:13:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:27.318 08:13:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:28.281 08:13:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:28.281 08:13:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:28.281 08:13:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.281 08:13:39 -- common/autotest_common.sh@10 -- # set +x 00:22:28.281 08:13:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:28.281 08:13:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:28.281 08:13:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:28.281 08:13:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.539 08:13:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:28.539 08:13:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:29.473 08:13:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.473 08:13:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.473 08:13:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.473 08:13:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.473 08:13:40 -- common/autotest_common.sh@10 -- # set +x 00:22:29.473 08:13:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.473 08:13:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.473 08:13:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.473 08:13:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:29.473 08:13:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:30.410 08:13:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.410 08:13:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
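The repeating rpc_cmd/jq/sort/xargs pattern here is the script's get_bdev_list and wait_for_bdev helpers: ask the host-side app, over /tmp/host.sock, which bdevs it currently has and poll once a second until the list matches the expected value. As traced, they boil down to the following (the real helpers presumably also bound the number of retries):

  get_bdev_list() {
      # space-separated names of all bdevs the host app currently exposes
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # poll until the bdev list equals the expected value, e.g. "nvme0n1" after the
      # first attach, "" once the interface is gone, "nvme1n1" after re-attach
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

rpc_cmd itself is the harness wrapper around scripts/rpc.py, so the same query can be issued by hand as scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs.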
00:22:30.410 08:13:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.410 08:13:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.410 08:13:41 -- common/autotest_common.sh@10 -- # set +x 00:22:30.410 08:13:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.410 08:13:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.410 08:13:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.669 08:13:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:30.669 08:13:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:31.607 08:13:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:31.607 08:13:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.607 08:13:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.607 08:13:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:31.607 08:13:42 -- common/autotest_common.sh@10 -- # set +x 00:22:31.607 08:13:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:31.607 08:13:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:31.607 08:13:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.607 08:13:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.607 08:13:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.542 08:13:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.542 08:13:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.542 08:13:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.542 08:13:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.542 08:13:43 -- common/autotest_common.sh@10 -- # set +x 00:22:32.542 08:13:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.542 08:13:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.542 08:13:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.800 [2024-12-07 08:13:43.825798] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:32.800 [2024-12-07 08:13:43.825867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.800 [2024-12-07 08:13:43.825883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.800 [2024-12-07 08:13:43.825896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.800 [2024-12-07 08:13:43.825906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.800 [2024-12-07 08:13:43.825916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.800 [2024-12-07 08:13:43.825925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.800 [2024-12-07 08:13:43.825935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.800 [2024-12-07 08:13:43.825944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.800 [2024-12-07 
08:13:43.825954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.800 [2024-12-07 08:13:43.825963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.800 [2024-12-07 08:13:43.825972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc690 is same with the state(5) to be set 00:22:32.800 08:13:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:32.800 08:13:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.800 [2024-12-07 08:13:43.835791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bc690 (9): Bad file descriptor 00:22:32.800 [2024-12-07 08:13:43.845811] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:33.735 08:13:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.735 08:13:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.735 08:13:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.735 08:13:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.735 08:13:44 -- common/autotest_common.sh@10 -- # set +x 00:22:33.735 08:13:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:33.735 08:13:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.735 [2024-12-07 08:13:44.862292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:34.670 [2024-12-07 08:13:45.887327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:34.670 [2024-12-07 08:13:45.887465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bc690 with addr=10.0.0.2, port=4420 00:22:34.670 [2024-12-07 08:13:45.887500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc690 is same with the state(5) to be set 00:22:34.670 [2024-12-07 08:13:45.887554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:34.670 [2024-12-07 08:13:45.887577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:34.670 [2024-12-07 08:13:45.887595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:34.670 [2024-12-07 08:13:45.887615] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:34.670 [2024-12-07 08:13:45.888522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bc690 (9): Bad file descriptor 00:22:34.670 [2024-12-07 08:13:45.888614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
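The errno 110 (Connection timed out) failures and the abandoned reset above are the expected consequence of the discovery options handed to the host app earlier in the trace: with a 1 s reconnect delay and a 2 s controller-loss timeout, the initiator only retries briefly after 10.0.0.2 vanishes before declaring the controller failed and dropping its bdev. The traced rpc_cmd call at discovery_remove_ifc.sh@69 is equivalent to this direct rpc.py invocation:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 \
      --wait-for-attach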
00:22:34.670 [2024-12-07 08:13:45.888665] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:34.670 [2024-12-07 08:13:45.888734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.670 [2024-12-07 08:13:45.888764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.670 [2024-12-07 08:13:45.888790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.670 [2024-12-07 08:13:45.888820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.670 [2024-12-07 08:13:45.888849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.670 [2024-12-07 08:13:45.888876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.670 [2024-12-07 08:13:45.888898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.670 [2024-12-07 08:13:45.888918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.670 [2024-12-07 08:13:45.888940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:34.670 [2024-12-07 08:13:45.888959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:34.670 [2024-12-07 08:13:45.888979] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
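Stripped of the polling and log noise, the core of the test is the sequence below, every line of which appears in the surrounding trace: pull the target's data address out from under the established connection, wait for the host to give the bdev up, restore the interface, then wait for rediscovery to attach a fresh controller.

  # discovery_remove_ifc.sh@75-86, condensed from the trace
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''             # nvme0n1 must disappear once the ctrlr-loss timeout expires

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1        # the re-discovered controller comes back as nvme1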
00:22:34.670 [2024-12-07 08:13:45.889010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x131a410 (9): Bad file descriptor 00:22:34.670 [2024-12-07 08:13:45.889656] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:34.670 [2024-12-07 08:13:45.889744] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:34.670 08:13:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.670 08:13:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.670 08:13:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.045 08:13:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.045 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.045 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.045 08:13:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.045 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.045 08:13:46 -- common/autotest_common.sh@10 -- # set +x 00:22:36.045 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.045 08:13:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.045 08:13:46 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:36.045 08:13:46 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:36.046 08:13:46 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:36.046 08:13:46 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:36.046 08:13:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.046 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.046 08:13:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.046 08:13:46 -- common/autotest_common.sh@10 -- # set +x 00:22:36.046 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.046 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.046 08:13:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.046 08:13:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.046 08:13:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:36.046 08:13:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:36.980 [2024-12-07 08:13:47.896370] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.980 [2024-12-07 08:13:47.896396] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.980 [2024-12-07 08:13:47.896429] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.980 [2024-12-07 08:13:47.982461] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:36.980 [2024-12-07 08:13:48.037438] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:36.980 [2024-12-07 08:13:48.037501] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:36.980 [2024-12-07 08:13:48.037524] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:36.980 [2024-12-07 08:13:48.037539] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:36.980 [2024-12-07 08:13:48.037548] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.980 [2024-12-07 08:13:48.044933] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x135b3e0 was disconnected and freed. delete nvme_qpair. 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.980 08:13:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.980 08:13:48 -- common/autotest_common.sh@10 -- # set +x 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.980 08:13:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:36.980 08:13:48 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96939 00:22:36.980 08:13:48 -- common/autotest_common.sh@936 -- # '[' -z 96939 ']' 00:22:36.980 08:13:48 -- common/autotest_common.sh@940 -- # kill -0 96939 00:22:36.980 08:13:48 -- common/autotest_common.sh@941 -- # uname 00:22:36.980 08:13:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.980 08:13:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96939 00:22:36.980 08:13:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:36.980 08:13:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:36.980 killing process with pid 96939 00:22:36.980 08:13:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96939' 00:22:36.980 08:13:48 -- common/autotest_common.sh@955 -- # kill 96939 00:22:36.980 08:13:48 -- common/autotest_common.sh@960 -- # wait 96939 00:22:37.238 08:13:48 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:37.238 08:13:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:37.238 08:13:48 -- nvmf/common.sh@116 -- # sync 00:22:37.238 08:13:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:37.238 08:13:48 -- nvmf/common.sh@119 -- # set +e 00:22:37.238 08:13:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:37.238 08:13:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:37.238 rmmod nvme_tcp 00:22:37.238 rmmod nvme_fabrics 00:22:37.238 rmmod nvme_keyring 00:22:37.238 08:13:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:37.238 08:13:48 -- nvmf/common.sh@123 -- # set -e 00:22:37.238 08:13:48 -- nvmf/common.sh@124 -- # return 0 00:22:37.238 08:13:48 -- nvmf/common.sh@477 -- # '[' -n 96889 ']' 00:22:37.238 08:13:48 -- nvmf/common.sh@478 -- # killprocess 96889 00:22:37.238 08:13:48 -- common/autotest_common.sh@936 -- # '[' -z 96889 ']' 00:22:37.238 08:13:48 -- common/autotest_common.sh@940 -- # kill -0 96889 00:22:37.238 08:13:48 -- common/autotest_common.sh@941 -- # uname 00:22:37.238 08:13:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:37.238 08:13:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96889 00:22:37.239 08:13:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:37.239 08:13:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:37.239 killing process with pid 96889 
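killprocess, used here first on the host app (96939) and just below on the target (96889), is the harness's guarded kill-and-reap helper; its checks are all visible in the trace. A sketch of that logic (how the real helper treats a sudo-wrapped process differs and is not shown in this log):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 1                # only touch a pid that is still alive
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1 in this log
          [ "$process_name" = sudo ] && return 1            # placeholder for the traced sudo check
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                           # reap it so the exit status is collected
  }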
00:22:37.239 08:13:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96889' 00:22:37.239 08:13:48 -- common/autotest_common.sh@955 -- # kill 96889 00:22:37.239 08:13:48 -- common/autotest_common.sh@960 -- # wait 96889 00:22:37.497 08:13:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:37.497 08:13:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:37.497 08:13:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:37.497 08:13:48 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.497 08:13:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:37.497 08:13:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.497 08:13:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.497 08:13:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.497 08:13:48 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:37.497 00:22:37.497 real 0m14.294s 00:22:37.497 user 0m24.642s 00:22:37.497 sys 0m1.546s 00:22:37.497 08:13:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:37.497 08:13:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.497 ************************************ 00:22:37.497 END TEST nvmf_discovery_remove_ifc 00:22:37.497 ************************************ 00:22:37.497 08:13:48 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:37.497 08:13:48 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:37.497 08:13:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:37.497 08:13:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:37.497 08:13:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.497 ************************************ 00:22:37.497 START TEST nvmf_digest 00:22:37.497 ************************************ 00:22:37.497 08:13:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:37.757 * Looking for test storage... 00:22:37.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:37.757 08:13:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:37.757 08:13:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:37.757 08:13:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:37.757 08:13:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:37.757 08:13:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:37.757 08:13:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:37.757 08:13:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:37.757 08:13:48 -- scripts/common.sh@335 -- # IFS=.-: 00:22:37.757 08:13:48 -- scripts/common.sh@335 -- # read -ra ver1 00:22:37.757 08:13:48 -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.757 08:13:48 -- scripts/common.sh@336 -- # read -ra ver2 00:22:37.757 08:13:48 -- scripts/common.sh@337 -- # local 'op=<' 00:22:37.757 08:13:48 -- scripts/common.sh@339 -- # ver1_l=2 00:22:37.757 08:13:48 -- scripts/common.sh@340 -- # ver2_l=1 00:22:37.757 08:13:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:37.757 08:13:48 -- scripts/common.sh@343 -- # case "$op" in 00:22:37.757 08:13:48 -- scripts/common.sh@344 -- # : 1 00:22:37.757 08:13:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:37.757 08:13:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.757 08:13:48 -- scripts/common.sh@364 -- # decimal 1 00:22:37.757 08:13:48 -- scripts/common.sh@352 -- # local d=1 00:22:37.757 08:13:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.757 08:13:48 -- scripts/common.sh@354 -- # echo 1 00:22:37.757 08:13:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:37.757 08:13:48 -- scripts/common.sh@365 -- # decimal 2 00:22:37.757 08:13:48 -- scripts/common.sh@352 -- # local d=2 00:22:37.757 08:13:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.757 08:13:48 -- scripts/common.sh@354 -- # echo 2 00:22:37.757 08:13:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:37.757 08:13:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:37.757 08:13:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:37.757 08:13:48 -- scripts/common.sh@367 -- # return 0 00:22:37.757 08:13:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.757 08:13:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:37.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.757 --rc genhtml_branch_coverage=1 00:22:37.757 --rc genhtml_function_coverage=1 00:22:37.757 --rc genhtml_legend=1 00:22:37.757 --rc geninfo_all_blocks=1 00:22:37.757 --rc geninfo_unexecuted_blocks=1 00:22:37.757 00:22:37.757 ' 00:22:37.757 08:13:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:37.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.757 --rc genhtml_branch_coverage=1 00:22:37.757 --rc genhtml_function_coverage=1 00:22:37.757 --rc genhtml_legend=1 00:22:37.757 --rc geninfo_all_blocks=1 00:22:37.757 --rc geninfo_unexecuted_blocks=1 00:22:37.757 00:22:37.757 ' 00:22:37.757 08:13:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:37.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.757 --rc genhtml_branch_coverage=1 00:22:37.757 --rc genhtml_function_coverage=1 00:22:37.757 --rc genhtml_legend=1 00:22:37.757 --rc geninfo_all_blocks=1 00:22:37.757 --rc geninfo_unexecuted_blocks=1 00:22:37.757 00:22:37.757 ' 00:22:37.757 08:13:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:37.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.757 --rc genhtml_branch_coverage=1 00:22:37.757 --rc genhtml_function_coverage=1 00:22:37.757 --rc genhtml_legend=1 00:22:37.757 --rc geninfo_all_blocks=1 00:22:37.757 --rc geninfo_unexecuted_blocks=1 00:22:37.757 00:22:37.757 ' 00:22:37.757 08:13:48 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.757 08:13:48 -- nvmf/common.sh@7 -- # uname -s 00:22:37.757 08:13:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.757 08:13:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.757 08:13:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.757 08:13:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.757 08:13:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.757 08:13:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.757 08:13:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.757 08:13:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.757 08:13:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.757 08:13:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.757 08:13:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:22:37.757 
08:13:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:22:37.757 08:13:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.757 08:13:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.757 08:13:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.757 08:13:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.757 08:13:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.757 08:13:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.757 08:13:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.757 08:13:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.757 08:13:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.757 08:13:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.757 08:13:48 -- paths/export.sh@5 -- # export PATH 00:22:37.757 08:13:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.757 08:13:48 -- nvmf/common.sh@46 -- # : 0 00:22:37.757 08:13:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.757 08:13:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.757 08:13:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.757 08:13:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.757 08:13:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.757 08:13:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:37.757 08:13:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.757 08:13:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.757 08:13:48 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:37.757 08:13:48 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:37.757 08:13:48 -- host/digest.sh@16 -- # runtime=2 00:22:37.757 08:13:48 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:37.757 08:13:48 -- host/digest.sh@132 -- # nvmftestinit 00:22:37.757 08:13:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.757 08:13:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.757 08:13:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.758 08:13:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.758 08:13:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.758 08:13:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.758 08:13:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.758 08:13:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.758 08:13:48 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:37.758 08:13:48 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:37.758 08:13:48 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:37.758 08:13:48 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:37.758 08:13:48 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:37.758 08:13:48 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:37.758 08:13:48 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.758 08:13:48 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.758 08:13:48 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:37.758 08:13:48 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:37.758 08:13:48 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:37.758 08:13:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:37.758 08:13:48 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:37.758 08:13:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.758 08:13:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:37.758 08:13:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:37.758 08:13:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:37.758 08:13:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:37.758 08:13:48 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:37.758 08:13:48 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:37.758 Cannot find device "nvmf_tgt_br" 00:22:37.758 08:13:49 -- nvmf/common.sh@154 -- # true 00:22:37.758 08:13:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:37.758 Cannot find device "nvmf_tgt_br2" 00:22:37.758 08:13:49 -- nvmf/common.sh@155 -- # true 00:22:37.758 08:13:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:37.758 08:13:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:38.016 Cannot find device "nvmf_tgt_br" 00:22:38.016 08:13:49 -- nvmf/common.sh@157 -- # true 00:22:38.016 08:13:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:38.016 Cannot find device "nvmf_tgt_br2" 00:22:38.016 08:13:49 -- nvmf/common.sh@158 -- # true 00:22:38.016 08:13:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:38.016 08:13:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:38.016 
08:13:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:38.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.016 08:13:49 -- nvmf/common.sh@161 -- # true 00:22:38.016 08:13:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:38.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.016 08:13:49 -- nvmf/common.sh@162 -- # true 00:22:38.016 08:13:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:38.016 08:13:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:38.016 08:13:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:38.016 08:13:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:38.016 08:13:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:38.016 08:13:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:38.016 08:13:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:38.016 08:13:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:38.016 08:13:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:38.016 08:13:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:38.016 08:13:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:38.016 08:13:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:38.016 08:13:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:38.016 08:13:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:38.016 08:13:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:38.016 08:13:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:38.016 08:13:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:38.016 08:13:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:38.016 08:13:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:38.016 08:13:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:38.016 08:13:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:38.016 08:13:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:38.016 08:13:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:38.275 08:13:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:38.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:22:38.276 00:22:38.276 --- 10.0.0.2 ping statistics --- 00:22:38.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.276 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:38.276 08:13:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:38.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:38.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:38.276 00:22:38.276 --- 10.0.0.3 ping statistics --- 00:22:38.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.276 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:38.276 08:13:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:38.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:38.276 00:22:38.276 --- 10.0.0.1 ping statistics --- 00:22:38.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.276 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:38.276 08:13:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.276 08:13:49 -- nvmf/common.sh@421 -- # return 0 00:22:38.276 08:13:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:38.276 08:13:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.276 08:13:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:38.276 08:13:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:38.276 08:13:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.276 08:13:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:38.276 08:13:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:38.276 08:13:49 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:38.276 08:13:49 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:38.276 08:13:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:38.276 08:13:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:38.276 08:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:38.276 ************************************ 00:22:38.276 START TEST nvmf_digest_clean 00:22:38.276 ************************************ 00:22:38.276 08:13:49 -- common/autotest_common.sh@1114 -- # run_digest 00:22:38.276 08:13:49 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:38.276 08:13:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:38.276 08:13:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.276 08:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:38.276 08:13:49 -- nvmf/common.sh@469 -- # nvmfpid=97367 00:22:38.276 08:13:49 -- nvmf/common.sh@470 -- # waitforlisten 97367 00:22:38.276 08:13:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:38.276 08:13:49 -- common/autotest_common.sh@829 -- # '[' -z 97367 ']' 00:22:38.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.276 08:13:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.276 08:13:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.276 08:13:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.276 08:13:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.276 08:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:38.276 [2024-12-07 08:13:49.401352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:38.276 [2024-12-07 08:13:49.401636] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.276 [2024-12-07 08:13:49.543514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.535 [2024-12-07 08:13:49.612345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:38.535 [2024-12-07 08:13:49.612480] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.535 [2024-12-07 08:13:49.612493] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.535 [2024-12-07 08:13:49.612501] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.535 [2024-12-07 08:13:49.612524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.535 08:13:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.535 08:13:49 -- common/autotest_common.sh@862 -- # return 0 00:22:38.535 08:13:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:38.535 08:13:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.535 08:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:38.535 08:13:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.535 08:13:49 -- host/digest.sh@120 -- # common_target_config 00:22:38.535 08:13:49 -- host/digest.sh@43 -- # rpc_cmd 00:22:38.535 08:13:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.535 08:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:38.535 null0 00:22:38.535 [2024-12-07 08:13:49.798192] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.794 [2024-12-07 08:13:49.822315] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.794 08:13:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.794 08:13:49 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:38.794 08:13:49 -- host/digest.sh@77 -- # local rw bs qd 00:22:38.794 08:13:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:38.794 08:13:49 -- host/digest.sh@80 -- # rw=randread 00:22:38.794 08:13:49 -- host/digest.sh@80 -- # bs=4096 00:22:38.794 08:13:49 -- host/digest.sh@80 -- # qd=128 00:22:38.794 08:13:49 -- host/digest.sh@82 -- # bperfpid=97404 00:22:38.794 08:13:49 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:38.794 08:13:49 -- host/digest.sh@83 -- # waitforlisten 97404 /var/tmp/bperf.sock 00:22:38.794 08:13:49 -- common/autotest_common.sh@829 -- # '[' -z 97404 ']' 00:22:38.794 08:13:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:38.794 08:13:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.794 08:13:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:38.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:38.794 08:13:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.794 08:13:49 -- common/autotest_common.sh@10 -- # set +x 00:22:38.794 [2024-12-07 08:13:49.883404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:38.794 [2024-12-07 08:13:49.883766] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97404 ] 00:22:38.794 [2024-12-07 08:13:50.025513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.056 [2024-12-07 08:13:50.112812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.992 08:13:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.992 08:13:50 -- common/autotest_common.sh@862 -- # return 0 00:22:39.992 08:13:50 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:39.992 08:13:50 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:39.992 08:13:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:39.992 08:13:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:39.992 08:13:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:40.560 nvme0n1 00:22:40.560 08:13:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:40.560 08:13:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:40.560 Running I/O for 2 seconds... 
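The trace immediately above is the core of the clean-digest run: bdevperf is started paused and then driven entirely over /var/tmp/bperf.sock. A condensed sketch of that sequence, using only the commands visible in this run (repo paths and the socket are those of this workspace; the waitforlisten polling between steps is omitted):

  # start bdevperf in the background; --wait-for-rpc defers initialization until told over RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # finish initialization, then attach the target with TCP data digest enabled (--ddgst)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # kick off the 2-second workload against the nvme0n1 bdev created by the attach
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests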
00:22:42.465 00:22:42.465 Latency(us) 00:22:42.465 [2024-12-07T08:13:53.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.465 [2024-12-07T08:13:53.741Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:42.465 nvme0n1 : 2.00 21285.59 83.15 0.00 0.00 6008.19 2457.60 20852.36 00:22:42.465 [2024-12-07T08:13:53.741Z] =================================================================================================================== 00:22:42.465 [2024-12-07T08:13:53.741Z] Total : 21285.59 83.15 0.00 0.00 6008.19 2457.60 20852.36 00:22:42.465 0 00:22:42.465 08:13:53 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:42.465 08:13:53 -- host/digest.sh@92 -- # get_accel_stats 00:22:42.465 08:13:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:42.465 | select(.opcode=="crc32c") 00:22:42.465 | "\(.module_name) \(.executed)"' 00:22:42.465 08:13:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:42.465 08:13:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:42.725 08:13:53 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:42.725 08:13:53 -- host/digest.sh@93 -- # exp_module=software 00:22:42.725 08:13:53 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:42.725 08:13:53 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:42.725 08:13:53 -- host/digest.sh@97 -- # killprocess 97404 00:22:42.725 08:13:53 -- common/autotest_common.sh@936 -- # '[' -z 97404 ']' 00:22:42.725 08:13:53 -- common/autotest_common.sh@940 -- # kill -0 97404 00:22:42.725 08:13:53 -- common/autotest_common.sh@941 -- # uname 00:22:42.725 08:13:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.725 08:13:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97404 00:22:42.725 08:13:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:42.725 08:13:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:42.725 killing process with pid 97404 00:22:42.725 08:13:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97404' 00:22:42.725 Received shutdown signal, test time was about 2.000000 seconds 00:22:42.725 00:22:42.725 Latency(us) 00:22:42.725 [2024-12-07T08:13:54.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.725 [2024-12-07T08:13:54.001Z] =================================================================================================================== 00:22:42.725 [2024-12-07T08:13:54.001Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.725 08:13:53 -- common/autotest_common.sh@955 -- # kill 97404 00:22:42.725 08:13:53 -- common/autotest_common.sh@960 -- # wait 97404 00:22:42.984 08:13:54 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:42.984 08:13:54 -- host/digest.sh@77 -- # local rw bs qd 00:22:42.984 08:13:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:42.984 08:13:54 -- host/digest.sh@80 -- # rw=randread 00:22:42.984 08:13:54 -- host/digest.sh@80 -- # bs=131072 00:22:42.984 08:13:54 -- host/digest.sh@80 -- # qd=16 00:22:42.984 08:13:54 -- host/digest.sh@82 -- # bperfpid=97494 00:22:42.984 08:13:54 -- host/digest.sh@83 -- # waitforlisten 97494 /var/tmp/bperf.sock 00:22:42.984 08:13:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:42.984 08:13:54 -- 
common/autotest_common.sh@829 -- # '[' -z 97494 ']' 00:22:42.984 08:13:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:42.984 08:13:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:42.984 08:13:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:42.984 08:13:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.984 08:13:54 -- common/autotest_common.sh@10 -- # set +x 00:22:42.984 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:42.984 Zero copy mechanism will not be used. 00:22:42.984 [2024-12-07 08:13:54.195234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:42.984 [2024-12-07 08:13:54.195344] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97494 ] 00:22:43.243 [2024-12-07 08:13:54.333572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.243 [2024-12-07 08:13:54.404756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.243 08:13:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.243 08:13:54 -- common/autotest_common.sh@862 -- # return 0 00:22:43.243 08:13:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:43.243 08:13:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:43.243 08:13:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:43.811 08:13:54 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.811 08:13:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:44.069 nvme0n1 00:22:44.069 08:13:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:44.069 08:13:55 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:44.069 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:44.069 Zero copy mechanism will not be used. 00:22:44.069 Running I/O for 2 seconds... 
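After each workload the harness checks which accel module actually computed the crc32c digests, via the accel_get_stats RPC and the jq filter shown in the trace above. A condensed form of that check (not the literal get_accel_stats helper):

  stats=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats)
  read -r acc_module acc_executed < <(
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' <<<"$stats")
  # with no crc32c offload configured in this run, the expected module is "software"
  (( acc_executed > 0 )) && [[ $acc_module == software ]]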
00:22:46.003 00:22:46.003 Latency(us) 00:22:46.003 [2024-12-07T08:13:57.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.003 [2024-12-07T08:13:57.279Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:46.003 nvme0n1 : 2.00 9667.82 1208.48 0.00 0.00 1651.94 580.89 3619.37 00:22:46.003 [2024-12-07T08:13:57.279Z] =================================================================================================================== 00:22:46.003 [2024-12-07T08:13:57.279Z] Total : 9667.82 1208.48 0.00 0.00 1651.94 580.89 3619.37 00:22:46.003 0 00:22:46.003 08:13:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:46.003 08:13:57 -- host/digest.sh@92 -- # get_accel_stats 00:22:46.003 08:13:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:46.003 08:13:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:46.003 08:13:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:46.003 | select(.opcode=="crc32c") 00:22:46.003 | "\(.module_name) \(.executed)"' 00:22:46.262 08:13:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:46.262 08:13:57 -- host/digest.sh@93 -- # exp_module=software 00:22:46.262 08:13:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:46.262 08:13:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:46.262 08:13:57 -- host/digest.sh@97 -- # killprocess 97494 00:22:46.262 08:13:57 -- common/autotest_common.sh@936 -- # '[' -z 97494 ']' 00:22:46.262 08:13:57 -- common/autotest_common.sh@940 -- # kill -0 97494 00:22:46.262 08:13:57 -- common/autotest_common.sh@941 -- # uname 00:22:46.262 08:13:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:46.262 08:13:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97494 00:22:46.262 08:13:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:46.262 08:13:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:46.262 killing process with pid 97494 00:22:46.262 08:13:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97494' 00:22:46.262 Received shutdown signal, test time was about 2.000000 seconds 00:22:46.262 00:22:46.262 Latency(us) 00:22:46.262 [2024-12-07T08:13:57.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.262 [2024-12-07T08:13:57.538Z] =================================================================================================================== 00:22:46.262 [2024-12-07T08:13:57.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.262 08:13:57 -- common/autotest_common.sh@955 -- # kill 97494 00:22:46.262 08:13:57 -- common/autotest_common.sh@960 -- # wait 97494 00:22:46.522 08:13:57 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:46.522 08:13:57 -- host/digest.sh@77 -- # local rw bs qd 00:22:46.522 08:13:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:46.522 08:13:57 -- host/digest.sh@80 -- # rw=randwrite 00:22:46.522 08:13:57 -- host/digest.sh@80 -- # bs=4096 00:22:46.522 08:13:57 -- host/digest.sh@80 -- # qd=128 00:22:46.522 08:13:57 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:46.522 08:13:57 -- host/digest.sh@82 -- # bperfpid=97570 00:22:46.522 08:13:57 -- host/digest.sh@83 -- # waitforlisten 97570 /var/tmp/bperf.sock 00:22:46.522 08:13:57 -- 
common/autotest_common.sh@829 -- # '[' -z 97570 ']' 00:22:46.522 08:13:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:46.522 08:13:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:46.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:46.522 08:13:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:46.522 08:13:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:46.522 08:13:57 -- common/autotest_common.sh@10 -- # set +x 00:22:46.522 [2024-12-07 08:13:57.766748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:46.522 [2024-12-07 08:13:57.766835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97570 ] 00:22:46.782 [2024-12-07 08:13:57.899103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.782 [2024-12-07 08:13:57.970868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.782 08:13:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.782 08:13:58 -- common/autotest_common.sh@862 -- # return 0 00:22:46.782 08:13:58 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:46.782 08:13:58 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:46.782 08:13:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:47.350 08:13:58 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:47.350 08:13:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:47.608 nvme0n1 00:22:47.608 08:13:58 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:47.608 08:13:58 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:47.608 Running I/O for 2 seconds... 
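Each bperf child is torn down by killprocess, whose checks are visible in the trace: confirm the pid is still alive, look at the process name (reactor_1 here), then kill and wait. A condensed sketch of that helper (the real one also branches on uname and handles sudo-wrapped processes differently):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return                          # pid must still exist
      if [[ "$(ps --no-headers -o comm= "$pid")" != "sudo" ]]; then
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"                                       # reap it before the next bperf run starts
  }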
00:22:50.139 00:22:50.139 Latency(us) 00:22:50.139 [2024-12-07T08:14:01.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.139 [2024-12-07T08:14:01.415Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:50.139 nvme0n1 : 2.00 25640.17 100.16 0.00 0.00 4986.80 2010.76 15192.44 00:22:50.139 [2024-12-07T08:14:01.415Z] =================================================================================================================== 00:22:50.139 [2024-12-07T08:14:01.415Z] Total : 25640.17 100.16 0.00 0.00 4986.80 2010.76 15192.44 00:22:50.139 0 00:22:50.139 08:14:00 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:50.139 08:14:00 -- host/digest.sh@92 -- # get_accel_stats 00:22:50.139 08:14:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:50.139 08:14:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:50.139 08:14:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:50.139 | select(.opcode=="crc32c") 00:22:50.139 | "\(.module_name) \(.executed)"' 00:22:50.139 08:14:01 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:50.139 08:14:01 -- host/digest.sh@93 -- # exp_module=software 00:22:50.139 08:14:01 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:50.139 08:14:01 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:50.139 08:14:01 -- host/digest.sh@97 -- # killprocess 97570 00:22:50.139 08:14:01 -- common/autotest_common.sh@936 -- # '[' -z 97570 ']' 00:22:50.139 08:14:01 -- common/autotest_common.sh@940 -- # kill -0 97570 00:22:50.139 08:14:01 -- common/autotest_common.sh@941 -- # uname 00:22:50.139 08:14:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.139 08:14:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97570 00:22:50.139 killing process with pid 97570 00:22:50.139 Received shutdown signal, test time was about 2.000000 seconds 00:22:50.139 00:22:50.139 Latency(us) 00:22:50.139 [2024-12-07T08:14:01.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.139 [2024-12-07T08:14:01.415Z] =================================================================================================================== 00:22:50.139 [2024-12-07T08:14:01.415Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.139 08:14:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:50.139 08:14:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:50.140 08:14:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97570' 00:22:50.140 08:14:01 -- common/autotest_common.sh@955 -- # kill 97570 00:22:50.140 08:14:01 -- common/autotest_common.sh@960 -- # wait 97570 00:22:50.140 08:14:01 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:50.140 08:14:01 -- host/digest.sh@77 -- # local rw bs qd 00:22:50.140 08:14:01 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:50.140 08:14:01 -- host/digest.sh@80 -- # rw=randwrite 00:22:50.140 08:14:01 -- host/digest.sh@80 -- # bs=131072 00:22:50.140 08:14:01 -- host/digest.sh@80 -- # qd=16 00:22:50.140 08:14:01 -- host/digest.sh@82 -- # bperfpid=97643 00:22:50.140 08:14:01 -- host/digest.sh@83 -- # waitforlisten 97643 /var/tmp/bperf.sock 00:22:50.140 08:14:01 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:50.140 08:14:01 -- 
common/autotest_common.sh@829 -- # '[' -z 97643 ']' 00:22:50.140 08:14:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:50.140 08:14:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.140 08:14:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:50.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:50.140 08:14:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.140 08:14:01 -- common/autotest_common.sh@10 -- # set +x 00:22:50.398 [2024-12-07 08:14:01.455294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:50.398 [2024-12-07 08:14:01.455631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97643 ] 00:22:50.398 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:50.398 Zero copy mechanism will not be used. 00:22:50.398 [2024-12-07 08:14:01.594029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.398 [2024-12-07 08:14:01.672479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.333 08:14:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.333 08:14:02 -- common/autotest_common.sh@862 -- # return 0 00:22:51.333 08:14:02 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:51.333 08:14:02 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:51.333 08:14:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:51.592 08:14:02 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.592 08:14:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:51.851 nvme0n1 00:22:51.851 08:14:03 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:51.851 08:14:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:52.110 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:52.110 Zero copy mechanism will not be used. 00:22:52.110 Running I/O for 2 seconds... 
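The four clean-digest runs differ only in the three run_bperf parameters (rw, bs, qd), which map directly onto the bdevperf -w/-o/-q flags seen in each invocation above. The matrix exercised in this job, as a sketch:

  # run_bperf <rw> <bs> <qd>  ->  bdevperf -w <rw> -o <bs> -q <qd> -t 2 -z --wait-for-rpc
  run_bperf randread  4096   128
  run_bperf randread  131072 16    # 131072 exceeds the 65536-byte zero-copy threshold noted above
  run_bperf randwrite 4096   128
  run_bperf randwrite 131072 16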
00:22:54.016 00:22:54.016 Latency(us) 00:22:54.016 [2024-12-07T08:14:05.292Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.016 [2024-12-07T08:14:05.292Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:54.016 nvme0n1 : 2.00 8179.64 1022.46 0.00 0.00 1951.52 1586.27 4557.73 00:22:54.016 [2024-12-07T08:14:05.292Z] =================================================================================================================== 00:22:54.016 [2024-12-07T08:14:05.292Z] Total : 8179.64 1022.46 0.00 0.00 1951.52 1586.27 4557.73 00:22:54.016 0 00:22:54.016 08:14:05 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:54.016 08:14:05 -- host/digest.sh@92 -- # get_accel_stats 00:22:54.016 08:14:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:54.016 08:14:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:54.016 08:14:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:54.016 | select(.opcode=="crc32c") 00:22:54.016 | "\(.module_name) \(.executed)"' 00:22:54.275 08:14:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:54.275 08:14:05 -- host/digest.sh@93 -- # exp_module=software 00:22:54.275 08:14:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:54.275 08:14:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:54.275 08:14:05 -- host/digest.sh@97 -- # killprocess 97643 00:22:54.275 08:14:05 -- common/autotest_common.sh@936 -- # '[' -z 97643 ']' 00:22:54.275 08:14:05 -- common/autotest_common.sh@940 -- # kill -0 97643 00:22:54.275 08:14:05 -- common/autotest_common.sh@941 -- # uname 00:22:54.275 08:14:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.275 08:14:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97643 00:22:54.275 killing process with pid 97643 00:22:54.275 Received shutdown signal, test time was about 2.000000 seconds 00:22:54.275 00:22:54.275 Latency(us) 00:22:54.275 [2024-12-07T08:14:05.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.275 [2024-12-07T08:14:05.551Z] =================================================================================================================== 00:22:54.275 [2024-12-07T08:14:05.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.275 08:14:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:54.275 08:14:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:54.275 08:14:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97643' 00:22:54.275 08:14:05 -- common/autotest_common.sh@955 -- # kill 97643 00:22:54.275 08:14:05 -- common/autotest_common.sh@960 -- # wait 97643 00:22:54.533 08:14:05 -- host/digest.sh@126 -- # killprocess 97367 00:22:54.533 08:14:05 -- common/autotest_common.sh@936 -- # '[' -z 97367 ']' 00:22:54.533 08:14:05 -- common/autotest_common.sh@940 -- # kill -0 97367 00:22:54.533 08:14:05 -- common/autotest_common.sh@941 -- # uname 00:22:54.533 08:14:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.533 08:14:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97367 00:22:54.533 killing process with pid 97367 00:22:54.533 08:14:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:54.533 08:14:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:54.533 08:14:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97367' 
00:22:54.533 08:14:05 -- common/autotest_common.sh@955 -- # kill 97367 00:22:54.533 08:14:05 -- common/autotest_common.sh@960 -- # wait 97367 00:22:54.791 00:22:54.791 real 0m16.626s 00:22:54.791 user 0m31.854s 00:22:54.791 sys 0m4.597s 00:22:54.791 08:14:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:54.791 ************************************ 00:22:54.791 END TEST nvmf_digest_clean 00:22:54.791 ************************************ 00:22:54.791 08:14:05 -- common/autotest_common.sh@10 -- # set +x 00:22:54.791 08:14:06 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:54.791 08:14:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:54.791 08:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:54.791 08:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:54.791 ************************************ 00:22:54.791 START TEST nvmf_digest_error 00:22:54.791 ************************************ 00:22:54.791 08:14:06 -- common/autotest_common.sh@1114 -- # run_digest_error 00:22:54.791 08:14:06 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:54.791 08:14:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:54.791 08:14:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.791 08:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:54.791 08:14:06 -- nvmf/common.sh@469 -- # nvmfpid=97761 00:22:54.791 08:14:06 -- nvmf/common.sh@470 -- # waitforlisten 97761 00:22:54.791 08:14:06 -- common/autotest_common.sh@829 -- # '[' -z 97761 ']' 00:22:54.791 08:14:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:54.791 08:14:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.791 08:14:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.791 08:14:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.791 08:14:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.791 08:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:55.050 [2024-12-07 08:14:06.077713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:55.050 [2024-12-07 08:14:06.077824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.050 [2024-12-07 08:14:06.218640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.050 [2024-12-07 08:14:06.296936] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:55.050 [2024-12-07 08:14:06.297110] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.050 [2024-12-07 08:14:06.297122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.050 [2024-12-07 08:14:06.297130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:55.050 [2024-12-07 08:14:06.297153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.985 08:14:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.985 08:14:06 -- common/autotest_common.sh@862 -- # return 0 00:22:55.985 08:14:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:55.985 08:14:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.985 08:14:06 -- common/autotest_common.sh@10 -- # set +x 00:22:55.985 08:14:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.985 08:14:07 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:55.985 08:14:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.985 08:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:55.985 [2024-12-07 08:14:07.037797] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:55.985 08:14:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.985 08:14:07 -- host/digest.sh@104 -- # common_target_config 00:22:55.985 08:14:07 -- host/digest.sh@43 -- # rpc_cmd 00:22:55.985 08:14:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.985 08:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:55.985 null0 00:22:55.985 [2024-12-07 08:14:07.141694] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.985 [2024-12-07 08:14:07.165802] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.985 08:14:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.985 08:14:07 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:55.985 08:14:07 -- host/digest.sh@54 -- # local rw bs qd 00:22:55.985 08:14:07 -- host/digest.sh@56 -- # rw=randread 00:22:55.985 08:14:07 -- host/digest.sh@56 -- # bs=4096 00:22:55.985 08:14:07 -- host/digest.sh@56 -- # qd=128 00:22:55.985 08:14:07 -- host/digest.sh@58 -- # bperfpid=97802 00:22:55.985 08:14:07 -- host/digest.sh@60 -- # waitforlisten 97802 /var/tmp/bperf.sock 00:22:55.985 08:14:07 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:55.985 08:14:07 -- common/autotest_common.sh@829 -- # '[' -z 97802 ']' 00:22:55.985 08:14:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:55.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:55.985 08:14:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.985 08:14:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:55.985 08:14:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.985 08:14:07 -- common/autotest_common.sh@10 -- # set +x 00:22:55.985 [2024-12-07 08:14:07.227349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:55.985 [2024-12-07 08:14:07.227452] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97802 ] 00:22:56.243 [2024-12-07 08:14:07.368843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.243 [2024-12-07 08:14:07.441107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.177 08:14:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.177 08:14:08 -- common/autotest_common.sh@862 -- # return 0 00:22:57.177 08:14:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:57.177 08:14:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:57.435 08:14:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:57.435 08:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.435 08:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:57.435 08:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.435 08:14:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.435 08:14:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.693 nvme0n1 00:22:57.693 08:14:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:57.693 08:14:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.693 08:14:08 -- common/autotest_common.sh@10 -- # set +x 00:22:57.693 08:14:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.693 08:14:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:57.693 08:14:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.693 Running I/O for 2 seconds... 
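The error-path test differs from the clean one in the RPCs traced above: crc32c is re-assigned to the error accel module, the bperf side attaches with digests enabled and unlimited bdev retries, and error injection is then armed to corrupt crc32c results, which is what produces the stream of data digest errors in the completions that follow. Condensed, using the script's own helpers (rpc_cmd for the nvmf target app, bperf_rpc/bperf_py for the bdevperf socket, as elsewhere in this trace):

  rpc_cmd   accel_assign_opc -o crc32c -m error                   # route crc32c through the error module
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd   accel_error_inject_error -o crc32c -t disable         # start with injection off
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 256  # now corrupt crc32c results
  bperf_py  perform_tests                                         # the reads below fail digest verification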
00:22:57.693 [2024-12-07 08:14:08.967535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.693 [2024-12-07 08:14:08.967598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.693 [2024-12-07 08:14:08.967630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:08.981517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:08.981570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:08.981599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:08.994489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:08.994540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:08.994570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.008008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.008059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.008088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.021739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.021791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.021821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.030545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.030595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.030624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.043564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.043615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.043644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.057639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.057730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.057762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.070460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.070501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.070515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.081588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.081641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.081677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.092502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.092553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.092581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.102683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.102733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.102762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.114224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.114286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.114315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.127164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.127242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.127255] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.136896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.136946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.136975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.148594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.148646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.148675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.158515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.158567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.158596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.170494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.170544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.170572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.182895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.182946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.182975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.195903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.195955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.951 [2024-12-07 08:14:09.195984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.951 [2024-12-07 08:14:09.208611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.951 [2024-12-07 08:14:09.208662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:57.952 [2024-12-07 08:14:09.208691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:57.952 [2024-12-07 08:14:09.217790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:57.952 [2024-12-07 08:14:09.217854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:57.952 [2024-12-07 08:14:09.217884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.233238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.233288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.233316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.243476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.243528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.243557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.253545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.253594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.253623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.263783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.263834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.263863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.273499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.273550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.273579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.283445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.283495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:87 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.283524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.293370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.293420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.293449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.303416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.303467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.303497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.315392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.315442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.315471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.326587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.326638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.326667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.337563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.337614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.337643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.349427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.349479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.349508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.360150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.360224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.360238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.370378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.370428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.370457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.380706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.380757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.380785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.392954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.210 [2024-12-07 08:14:09.393005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.210 [2024-12-07 08:14:09.393034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.210 [2024-12-07 08:14:09.403623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.211 [2024-12-07 08:14:09.403673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.403701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.211 [2024-12-07 08:14:09.414839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.211 [2024-12-07 08:14:09.414890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.414919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.211 [2024-12-07 08:14:09.426955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.211 [2024-12-07 08:14:09.427005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.427034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.211 [2024-12-07 08:14:09.438936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 
00:22:58.211 [2024-12-07 08:14:09.438986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.439015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.211 [2024-12-07 08:14:09.449286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.211 [2024-12-07 08:14:09.449335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.449364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.211 [2024-12-07 08:14:09.461776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.211 [2024-12-07 08:14:09.461830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.461859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.211 [2024-12-07 08:14:09.472405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.211 [2024-12-07 08:14:09.472457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.472486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.211 [2024-12-07 08:14:09.483919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.211 [2024-12-07 08:14:09.483972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.211 [2024-12-07 08:14:09.484001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.470 [2024-12-07 08:14:09.495025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.470 [2024-12-07 08:14:09.495076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.470 [2024-12-07 08:14:09.495104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.506593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.506643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.506671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.517940] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.517994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.518038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.527740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.527790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.527818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.540790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.540841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.540870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.552976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.553027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.553055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.567343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.567381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.567395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.581441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.581517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.581547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.592487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.592540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.592586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:22:58.471 [2024-12-07 08:14:09.606954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.607007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.607036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.618286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.618335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.618364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.629394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.629432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.629446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.640154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.640247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.640262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.649850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.649903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.649932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.662623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.662675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.662703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.676614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.676664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.676694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.690301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.690363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.690393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.704480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.704532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.704562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.717111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.717165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.717195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.727005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.727057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.727086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.471 [2024-12-07 08:14:09.739191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.471 [2024-12-07 08:14:09.739254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.471 [2024-12-07 08:14:09.739285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.750063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.750115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.750158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.763761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.763813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.763842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.775478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.775529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.775558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.785061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.785112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.785140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.798324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.798373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.798401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.809760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.809798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.809828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.819620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.819671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.819700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.830767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.830817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.830845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.842298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.842358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:58.731 [2024-12-07 08:14:09.842388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.856227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.856278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.856308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.865252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.865303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.865331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.879033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.879084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.879113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.890704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.890755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.890784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.902851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.902901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.902930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.913160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.913235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.913249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.926725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.731 [2024-12-07 08:14:09.926776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:533 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.731 [2024-12-07 08:14:09.926805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.731 [2024-12-07 08:14:09.940042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.732 [2024-12-07 08:14:09.940075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.732 [2024-12-07 08:14:09.940104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.732 [2024-12-07 08:14:09.955670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.732 [2024-12-07 08:14:09.955723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.732 [2024-12-07 08:14:09.955753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.732 [2024-12-07 08:14:09.966297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.732 [2024-12-07 08:14:09.966334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.732 [2024-12-07 08:14:09.966364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.732 [2024-12-07 08:14:09.980638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.732 [2024-12-07 08:14:09.980689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.732 [2024-12-07 08:14:09.980717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.732 [2024-12-07 08:14:09.994440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.732 [2024-12-07 08:14:09.994489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.732 [2024-12-07 08:14:09.994518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.008473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.008542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.008571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.020314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.020366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.020394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.032014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.032065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.032094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.044291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.044341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.044368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.056854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.056906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.056934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.066717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.066768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.066797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.079584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.079650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.079678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.092890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.092941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.092970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.106929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 
00:22:58.991 [2024-12-07 08:14:10.106981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.107010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.118622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.118673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.118702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.128395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.128446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.128475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.138424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.138474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.138503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.149177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.149237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.149266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.159311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.159361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.159389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.169164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.169242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.991 [2024-12-07 08:14:10.169256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.991 [2024-12-07 08:14:10.179716] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.991 [2024-12-07 08:14:10.179766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.992 [2024-12-07 08:14:10.179795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.992 [2024-12-07 08:14:10.190306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.992 [2024-12-07 08:14:10.190356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.992 [2024-12-07 08:14:10.190384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.992 [2024-12-07 08:14:10.204025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.992 [2024-12-07 08:14:10.204077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.992 [2024-12-07 08:14:10.204105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.992 [2024-12-07 08:14:10.215764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.992 [2024-12-07 08:14:10.215815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.992 [2024-12-07 08:14:10.215844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.992 [2024-12-07 08:14:10.228379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.992 [2024-12-07 08:14:10.228429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.992 [2024-12-07 08:14:10.228458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.992 [2024-12-07 08:14:10.240977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.992 [2024-12-07 08:14:10.241027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.992 [2024-12-07 08:14:10.241056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:58.992 [2024-12-07 08:14:10.254610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:58.992 [2024-12-07 08:14:10.254661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.992 [2024-12-07 08:14:10.254689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:22:59.251 [2024-12-07 08:14:10.267741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.251 [2024-12-07 08:14:10.267794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.251 [2024-12-07 08:14:10.267823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.251 [2024-12-07 08:14:10.277142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.251 [2024-12-07 08:14:10.277192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.251 [2024-12-07 08:14:10.277243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.251 [2024-12-07 08:14:10.290177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.251 [2024-12-07 08:14:10.290236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.251 [2024-12-07 08:14:10.290264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.251 [2024-12-07 08:14:10.302366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.251 [2024-12-07 08:14:10.302416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.302445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.311488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.311538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.311566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.321625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.321701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.321731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.332690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.332740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.332768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.343286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.343336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.343364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.355707] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.355758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.355786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.367998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.368049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.368077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.379061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.379111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.379140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.390767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.390817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.390845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.400745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.400796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.400824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.412588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.412655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.412683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.425643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.425716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.425761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.439233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.439282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.439310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.451061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.451127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.451156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.460677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.460728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.460772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.471737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.471787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.471816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.481851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.481905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.481935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.492363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.492412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.252 [2024-12-07 08:14:10.492441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.503320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.503370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.503398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.252 [2024-12-07 08:14:10.514360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.252 [2024-12-07 08:14:10.514411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.252 [2024-12-07 08:14:10.514439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.512 [2024-12-07 08:14:10.525933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.512 [2024-12-07 08:14:10.526003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.512 [2024-12-07 08:14:10.526033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.536746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.536797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.536825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.548310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.548360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.548388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.559728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.559780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.559808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.570620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.570670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:18508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.570698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.583032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.583084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.583113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.592039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.592089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.592117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.602097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.602163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.602192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.612830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.612880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.612908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.626708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.626759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.626787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.639594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.639645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.639673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.650575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.650626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.650654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.660506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.660556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.660584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.670573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.670623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.670651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.682606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.682656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.682684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.695100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.695152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.695181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.705007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.705058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.705086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.716383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.716434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.716463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.727778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 
00:22:59.513 [2024-12-07 08:14:10.727828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.727856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.737775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.737827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.737856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.750583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.750634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.513 [2024-12-07 08:14:10.750662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.513 [2024-12-07 08:14:10.760301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.513 [2024-12-07 08:14:10.760351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.514 [2024-12-07 08:14:10.760379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.514 [2024-12-07 08:14:10.772720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.514 [2024-12-07 08:14:10.772772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.514 [2024-12-07 08:14:10.772801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.514 [2024-12-07 08:14:10.786432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.514 [2024-12-07 08:14:10.786471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.514 [2024-12-07 08:14:10.786485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.802940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.802993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.803021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.812589] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.812640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.812668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.825806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.825860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.825890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.838324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.838361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.838390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.851532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.851599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.851628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.865295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.865346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.865375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.878507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.878557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.878586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.891519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.891572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.774 [2024-12-07 08:14:10.891600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:22:59.774 [2024-12-07 08:14:10.904816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.774 [2024-12-07 08:14:10.904867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.775 [2024-12-07 08:14:10.904896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.775 [2024-12-07 08:14:10.917371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.775 [2024-12-07 08:14:10.917424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.775 [2024-12-07 08:14:10.917454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.775 [2024-12-07 08:14:10.926939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.775 [2024-12-07 08:14:10.926992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.775 [2024-12-07 08:14:10.927020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.775 [2024-12-07 08:14:10.940329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12ca8d0) 00:22:59.775 [2024-12-07 08:14:10.940379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.775 [2024-12-07 08:14:10.940408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:59.775 00:22:59.775 Latency(us) 00:22:59.775 [2024-12-07T08:14:11.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.775 [2024-12-07T08:14:11.051Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:59.775 nvme0n1 : 2.00 21600.97 84.38 0.00 0.00 5920.22 2204.39 19422.49 00:22:59.775 [2024-12-07T08:14:11.051Z] =================================================================================================================== 00:22:59.775 [2024-12-07T08:14:11.051Z] Total : 21600.97 84.38 0.00 0.00 5920.22 2204.39 19422.49 00:22:59.775 0 00:22:59.775 08:14:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:59.775 08:14:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:59.775 08:14:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:59.775 | .driver_specific 00:22:59.775 | .nvme_error 00:22:59.775 | .status_code 00:22:59.775 | .command_transient_transport_error' 00:22:59.775 08:14:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:00.034 08:14:11 -- host/digest.sh@71 -- # (( 169 > 0 )) 00:23:00.034 08:14:11 -- host/digest.sh@73 -- # killprocess 97802 00:23:00.034 08:14:11 -- common/autotest_common.sh@936 -- # '[' -z 97802 ']' 00:23:00.034 08:14:11 -- common/autotest_common.sh@940 -- # kill -0 97802 00:23:00.034 08:14:11 -- common/autotest_common.sh@941 -- # uname 00:23:00.034 08:14:11 
-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:00.034 08:14:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97802 00:23:00.034 killing process with pid 97802 00:23:00.034 Received shutdown signal, test time was about 2.000000 seconds 00:23:00.034 00:23:00.034 Latency(us) 00:23:00.034 [2024-12-07T08:14:11.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.034 [2024-12-07T08:14:11.310Z] =================================================================================================================== 00:23:00.034 [2024-12-07T08:14:11.310Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.034 08:14:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:00.034 08:14:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:00.034 08:14:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97802' 00:23:00.034 08:14:11 -- common/autotest_common.sh@955 -- # kill 97802 00:23:00.034 08:14:11 -- common/autotest_common.sh@960 -- # wait 97802 00:23:00.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:00.293 08:14:11 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:23:00.293 08:14:11 -- host/digest.sh@54 -- # local rw bs qd 00:23:00.293 08:14:11 -- host/digest.sh@56 -- # rw=randread 00:23:00.293 08:14:11 -- host/digest.sh@56 -- # bs=131072 00:23:00.293 08:14:11 -- host/digest.sh@56 -- # qd=16 00:23:00.293 08:14:11 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:00.293 08:14:11 -- host/digest.sh@58 -- # bperfpid=97891 00:23:00.293 08:14:11 -- host/digest.sh@60 -- # waitforlisten 97891 /var/tmp/bperf.sock 00:23:00.293 08:14:11 -- common/autotest_common.sh@829 -- # '[' -z 97891 ']' 00:23:00.293 08:14:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:00.293 08:14:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.293 08:14:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:00.293 08:14:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.293 08:14:11 -- common/autotest_common.sh@10 -- # set +x 00:23:00.293 [2024-12-07 08:14:11.505516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:00.293 [2024-12-07 08:14:11.505811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97891 ] 00:23:00.293 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:00.293 Zero copy mechanism will not be used. 
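Editor's note on the check logged above: get_transient_errcount asks the bdevperf RPC server for per-bdev NVMe error statistics via bdev_get_iostat and pulls out the command_transient_transport_error counter with jq; the digest test only asserts that this count is non-zero (169 in the run above). A minimal standalone sketch of that extraction, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock with bdev nvme0n1 attached and error statistics enabled via bdev_nvme_set_options --nvme-error-stat (as in the setup below):

# Sketch of the logged get_transient_errcount step (paths taken from the log above).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# bdev_get_iostat reports the per-status-code NVMe error counters under
# .driver_specific.nvme_error; the same jq filter as the harness reads the
# transient transport error count, and the test passes if it is > 0.
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "transient transport errors observed: $errcount"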
00:23:00.564 [2024-12-07 08:14:11.639357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.564 [2024-12-07 08:14:11.703134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.518 08:14:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.518 08:14:12 -- common/autotest_common.sh@862 -- # return 0 00:23:01.518 08:14:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:01.518 08:14:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:01.518 08:14:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:01.518 08:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.518 08:14:12 -- common/autotest_common.sh@10 -- # set +x 00:23:01.518 08:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.518 08:14:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:01.518 08:14:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.088 nvme0n1 00:23:02.088 08:14:13 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:02.088 08:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.088 08:14:13 -- common/autotest_common.sh@10 -- # set +x 00:23:02.088 08:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.088 08:14:13 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:02.088 08:14:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:02.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:02.088 Zero copy mechanism will not be used. 00:23:02.088 Running I/O for 2 seconds... 
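Editor's note on the setup just logged: for this run (randread, 128 KiB I/O, queue depth 16) the controller is attached over TCP with data digest enabled (--ddgst), and the accel layer is told to corrupt crc32c results at the logged interval (-i 32), so some of the READs that follow fail their data digest check and complete with the TRANSIENT TRANSPORT ERROR (00/22) status printed below; with --bdev-retry-count -1 those completions are retried and merely counted. A condensed sketch of the same RPC sequence, reusing the socket, address and flags from the log (the NVMe-oF target at 10.0.0.2:4420 exposing nqn.2016-06.io.spdk:cnode1 is assumed to be up already, as it is in this job):

# Sketch of the logged setup; all flags mirror the rpc.py calls above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bperf.sock

# Count NVMe errors per status code and retry failed I/O indefinitely,
# so injected digest errors show up as statistics rather than job failures.
"$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any crc32c error injection left over from the previous run.
"$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t disable

# Attach the remote namespace over TCP with data digest (--ddgst) enabled.
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results at the logged interval so received data digests
# mismatch on a fraction of the READs.
"$RPC" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the timed bdevperf workload that produces the completions below.
"$BPERF_PY" -s "$SOCK" perform_tests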
00:23:02.088 [2024-12-07 08:14:13.191998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.088 [2024-12-07 08:14:13.192058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.192071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.196026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.196073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.196084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.199733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.199781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.199793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.204036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.204084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.204096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.207991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.208054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.208066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.211024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.211089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.211101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.214983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.215030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.215042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.218496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.218543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.218554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.222383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.222433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.222445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.226189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.226273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.226285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.229595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.229642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.229653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.233344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.233392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.233404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.236728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.236775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.236786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.240478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.240526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.240538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.244154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.244201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.244221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.248222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.248280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.248292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.251176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.251233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.251244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.254501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.254550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.254566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.257656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.257726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.257738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.261859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.261892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.261903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.265096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.265143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.089 [2024-12-07 08:14:13.265153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.268923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.268970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.268981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.272835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.272883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.272894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.276275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.276322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.276334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.279417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.279465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.279476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.282732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.089 [2024-12-07 08:14:13.282780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.089 [2024-12-07 08:14:13.282791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.089 [2024-12-07 08:14:13.285832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.285880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.285891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.289360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.289408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.289419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.292817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.292865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.292877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.296171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.296229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.296242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.299712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.299760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.299771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.303148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.303195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.303206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.306642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.306676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.306688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.310514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.310560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.310572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.313425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.313471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.313482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.317050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.317097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.317108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.320390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.320436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.320448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.323552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.323598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.323609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.327222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.327277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.327289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.330172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.330229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.330242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.333039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.333084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.333095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.336083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.336131] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.336142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.340029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.340077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.340088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.343399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.343447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.343458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.346976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.347023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.347034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.350228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.350284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.350296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.353595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.353641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.353652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.356899] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.356964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.356976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.090 [2024-12-07 08:14:13.360649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x134ed10) 00:23:02.090 [2024-12-07 08:14:13.360697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.090 [2024-12-07 08:14:13.360709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.363954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.364003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.353 [2024-12-07 08:14:13.364014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.367655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.367706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.353 [2024-12-07 08:14:13.367719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.371300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.371346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.353 [2024-12-07 08:14:13.371358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.374663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.374711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.353 [2024-12-07 08:14:13.374723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.378054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.378104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.353 [2024-12-07 08:14:13.378131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.381643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.381717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.353 [2024-12-07 08:14:13.381730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.385591] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.385625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.353 [2024-12-07 08:14:13.385638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.353 [2024-12-07 08:14:13.389127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.353 [2024-12-07 08:14:13.389175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.389187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.392518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.392567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.392579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.395617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.395665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.395676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.399228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.399275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.399286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.402664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.402711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.402722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.406641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.406689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.406700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:02.354 [2024-12-07 08:14:13.409967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.410016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.410042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.413466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.413515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.413525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.417242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.417289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.417301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.420408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.420454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.420465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.423654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.423703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.423715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.427326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.427373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.427384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.430678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.430724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.430734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.434561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.434608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.434620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.437876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.437924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.437936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.441554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.441601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.441623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.444702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.444748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.444760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.448296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.448342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.448352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.451365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.451413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.451423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.454582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.454630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.454641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.457929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.457963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.457976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.461582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.461641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.461652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.464856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.464901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.464912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.468064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.468111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.468122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.471675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.471720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.471731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.475190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.475247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 [2024-12-07 08:14:13.475258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.479027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.479074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.354 
[2024-12-07 08:14:13.479085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.354 [2024-12-07 08:14:13.482635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.354 [2024-12-07 08:14:13.482681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.482693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.485878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.485925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.485936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.489400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.489448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.489458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.492538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.492584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.492595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.495567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.495613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.495624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.498744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.498791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.498802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.501736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.501783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.501795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.505112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.505157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.505167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.508953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.509000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.509010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.512685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.512732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.512744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.516091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.516138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.516149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.520088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.520136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.520147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.524102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.524149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.524160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.526892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.526939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.526950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.530433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.530480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.530491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.534047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.534093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.534118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.537319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.537363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.537374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.540559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.540621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.540632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.544017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.544063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.544074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.547647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.547694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.547704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.551402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.551448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.551459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.554590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.554635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.554646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.557781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.557813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.557826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.560952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.560998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.561008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.564287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.564333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.564344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.567533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.567579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.567589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.570495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.355 [2024-12-07 08:14:13.570541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.355 [2024-12-07 08:14:13.570552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.355 [2024-12-07 08:14:13.573218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 
00:23:02.355 [2024-12-07 08:14:13.573272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.573284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.576695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.576742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.576752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.580156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.580202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.580225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.583556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.583602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.583628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.586718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.586765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.586777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.590291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.590337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.590348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.593385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.593431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.593442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.596901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.596947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.596958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.600465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.600512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.600523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.603890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.603937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.603949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.607117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.607164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.607175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.610833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.610879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.610890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.614366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.614410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.614421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.617891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.617923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.617934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.356 [2024-12-07 08:14:13.621799] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.356 [2024-12-07 08:14:13.621848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.356 [2024-12-07 08:14:13.621860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.625458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.625519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.625530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.629085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.629130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.629142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.632865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.632911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.632923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.636260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.636305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.636316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.639774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.639821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.639832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.643220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.643265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.643276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:02.619 [2024-12-07 08:14:13.646753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.646800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.646811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.649947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.649979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.649991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.653461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.653508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.653519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.656767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.656814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.656824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.659807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.659853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.659863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.663192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.663247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.663259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.665984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.666045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.666056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.669429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.669475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.669486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.672708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.672756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.672767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.675829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.675875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.675886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.679487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.679533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.679543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.682778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.682824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.682834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.686568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.686614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.686624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.690220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.690276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.690287] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.694143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.694189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.694200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.697843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.697892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.697903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.701449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.701494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.619 [2024-12-07 08:14:13.701505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.619 [2024-12-07 08:14:13.704748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.619 [2024-12-07 08:14:13.704793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.704803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.707553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.707600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.707626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.710760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.710806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.710816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.714347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.714392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.714403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.717900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.717947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.717958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.721419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.721465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.721475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.724889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.724936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.724947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.728515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.728562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.728573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.731762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.731807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.731819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.735373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.735418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.735429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.739031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.739078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:02.620 [2024-12-07 08:14:13.739089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.742577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.742639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.742650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.745578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.745623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.745634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.749143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.749188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.749199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.752284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.752328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.752339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.755996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.756041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.756052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.759548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.759594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.759619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.762755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.762800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.762811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.766198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.766252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.766263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.769976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.770008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.770019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.773658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.773728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.773741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.776847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.776892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.776903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.780281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.780327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.780338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.784031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.784076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.784087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.787259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.787304] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.787315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.790542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.790588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.790599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.793937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.793985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.620 [2024-12-07 08:14:13.793997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.620 [2024-12-07 08:14:13.796630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.620 [2024-12-07 08:14:13.796694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.796705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.799689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.799734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.799744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.803836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.803882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.803893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.807323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.807369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.807381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.811169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.811240] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.811252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.815322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.815368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.815380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.818398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.818444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.818455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.822475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.822521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.822533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.825968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.826032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.826059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.828854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.828901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.828912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.831805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.831851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.831862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.835386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.835433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.835443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.838644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.838690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.838701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.842358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.842404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.842415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.846009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.846071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.846083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.849610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.849657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.849668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.853546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.853593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.853616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.857697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.857759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.857771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.861542] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.861589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.861612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.865081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.865127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.865139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.868896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.868942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.868953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.872663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.872708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.872736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.876331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.876378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.876388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.880120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.880166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.880177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.621 [2024-12-07 08:14:13.883887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.883933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.883944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:02.621 [2024-12-07 08:14:13.887441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.621 [2024-12-07 08:14:13.887489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.621 [2024-12-07 08:14:13.887500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.882 [2024-12-07 08:14:13.891234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.882 [2024-12-07 08:14:13.891293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.882 [2024-12-07 08:14:13.891306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.882 [2024-12-07 08:14:13.894776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.882 [2024-12-07 08:14:13.894822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.882 [2024-12-07 08:14:13.894834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.882 [2024-12-07 08:14:13.899168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.882 [2024-12-07 08:14:13.899228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.882 [2024-12-07 08:14:13.899241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.882 [2024-12-07 08:14:13.902271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.882 [2024-12-07 08:14:13.902324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.882 [2024-12-07 08:14:13.902336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.882 [2024-12-07 08:14:13.905786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.882 [2024-12-07 08:14:13.905821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.882 [2024-12-07 08:14:13.905833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.882 [2024-12-07 08:14:13.908697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.882 [2024-12-07 08:14:13.908744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.882 [2024-12-07 08:14:13.908755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.912560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.912606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.912617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.915878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.915925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.915935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.919492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.919541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.919552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.923097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.923144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.923155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.926811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.926857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.926868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.929902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.929953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.929965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.933424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.933470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.933481] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.936632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.936678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.936688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.940334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.940380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.940391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.943766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.943812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.943823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.947186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.947275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.947287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.950528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.950573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.950583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.953500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.953546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.953556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.956831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.956877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.956888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.959985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.960030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.960041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.963648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.963694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.963705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.966615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.966660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.966671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.969551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.969597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.969608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.973428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.973474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.973485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.976184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.976240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.976252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.979663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.979709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.979719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.983921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.983969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.983980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.986849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.986896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.986907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.990318] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.990363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.990373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.993487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.993532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.883 [2024-12-07 08:14:13.993543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.883 [2024-12-07 08:14:13.997013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.883 [2024-12-07 08:14:13.997059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:13.997070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.000799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.000847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.000858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.004803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.004850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.004861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.008419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.008465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.008476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.011788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.011834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.011844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.014686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.014731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.014742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.018241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.018296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.018308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.021851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.021884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.021896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.025811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.025844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.025855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.028962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.029008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.029019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.032972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.033035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.033046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.036779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.036827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.036839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.040907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.040954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.040966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.045247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.045306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.045318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.048426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.048460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.048472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.052323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.052371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.052383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.056190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 
[2024-12-07 08:14:14.056282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.056295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.060487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.060536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.060548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.064059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.064105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.064115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.068047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.068094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.068104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.071563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.071609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.071635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.074791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.074836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.074846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.077913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.077961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.077972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.081394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.081439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.081450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.084991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.085037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.085048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.088706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.088752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.088763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.091702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.884 [2024-12-07 08:14:14.091748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.884 [2024-12-07 08:14:14.091759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.884 [2024-12-07 08:14:14.095530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.095578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.095590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.098810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.098856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.098866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.102289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.102334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.102345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.106144] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.106190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.106202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.109875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.109924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.109937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.113887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.113937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.113949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.117307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.117353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.117364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.121076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.121123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.121134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.124802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.124848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.124859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.128095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.128140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.128151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:23:02.885 [2024-12-07 08:14:14.131750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.131796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.131807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.135095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.135142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.135153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.138621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.138666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.138677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.142245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.142299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.142310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.145534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.145580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.145606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.148643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.148690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:02.885 [2024-12-07 08:14:14.152700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:02.885 [2024-12-07 08:14:14.152747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:02.885 [2024-12-07 08:14:14.152758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.157000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.157048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.157059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.160805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.160852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.160863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.164365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.164412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.164423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.167785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.167831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.167842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.170801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.170847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.170857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.174539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.174586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.174597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.178019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.178097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.178123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.181225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.181280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.181292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.184658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.184705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.184716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.188406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.188451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.188462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.191242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.191287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.191299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.194855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.194901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.194911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.198328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.198358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.198370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.202147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.202193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:03.147 [2024-12-07 08:14:14.202204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.205565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.205611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.205622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.209472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.209519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.209530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.212705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.212752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.212764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.216391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.216437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.216448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.147 [2024-12-07 08:14:14.220345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.147 [2024-12-07 08:14:14.220393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.147 [2024-12-07 08:14:14.220404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.224192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.224248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.224260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.228235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.228279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.228289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.232144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.232191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.232201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.235855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.235901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.235912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.239948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.239995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.240006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.243739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.243785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.243795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.247721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.247768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.247779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.251485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.251530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.251542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.254596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.254643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.254654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.258439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.258485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.258496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.262023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.262086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.262111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.265415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.265462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.265473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.269311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.269358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.273024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.273071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.273081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.276445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.276491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.276502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.279284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 
00:23:03.148 [2024-12-07 08:14:14.279329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.279340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.282713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.282759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.282770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.285931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.285963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.285974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.289548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.289594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.289616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.292773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.148 [2024-12-07 08:14:14.292818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.148 [2024-12-07 08:14:14.292829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.148 [2024-12-07 08:14:14.295906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.295952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.295963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.299530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.299575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.299586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.302738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.302785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.302796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.306600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.306647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.306659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.310209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.310266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.310294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.313937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.313985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.314011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.317435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.317484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.317496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.320428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.320473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.320484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.323980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.324027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.324038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.327713] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.327760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.327770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.331545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.331592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.331603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.334986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.335032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.335043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.339036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.339082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.339093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.342966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.343013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.343024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.346891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.346938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.346948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.350423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.350469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.350480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:23:03.149 [2024-12-07 08:14:14.354185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.354254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.354266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.357743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.357789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.357800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.360245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.360288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.360299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.364046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.149 [2024-12-07 08:14:14.364091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.149 [2024-12-07 08:14:14.364102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.149 [2024-12-07 08:14:14.366845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.366891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.366902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.370343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.370387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.370398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.374229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.374285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.374296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.377569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.377631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.377641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.380449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.380495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.380505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.384553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.384599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.384610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.388111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.388157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.388168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.391701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.391747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.391758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.394876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.394922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.394933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.397648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.397719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.397731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.400836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.400881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.400892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.404539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.404586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.404597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.408327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.408373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.408385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.411695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.411742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.411753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.415171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.415233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.415246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.150 [2024-12-07 08:14:14.418971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.150 [2024-12-07 08:14:14.419017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.150 [2024-12-07 08:14:14.419028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.422816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.422862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:03.412 [2024-12-07 08:14:14.422872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.426503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.426550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.426577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.429907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.429941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.429953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.433484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.433531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.433542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.437073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.437121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.437133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.440897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.440944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.440956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.444092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.444139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.444150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.447947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.447994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.448005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.452042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.452089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.452100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.455747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.455793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.455805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.459124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.459170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.459181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.462691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.462738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.462749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.466054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.466101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.466112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.469565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.469612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.469623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.472538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.472585] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.472596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.476636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.476683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.476694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.480191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.480246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.480258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.483618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.483666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.483677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.487300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.487346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.487358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.490744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.490792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.490803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.494340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.494387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.494415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.497773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 
08:14:14.497807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.497820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.501439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.501486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.501497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.412 [2024-12-07 08:14:14.504682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.412 [2024-12-07 08:14:14.504728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.412 [2024-12-07 08:14:14.504739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.508483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.508529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.508540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.511530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.511577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.511588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.515405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.515453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.515480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.518542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.518603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.518615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.522787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.522835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.522845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.526476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.526521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.526533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.530321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.530368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.530379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.532823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.532869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.532881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.536138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.536183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.536193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.539634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.539679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.539690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.543438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.543485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.543496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.546640] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.546687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.546697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.550312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.550358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.550368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.553758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.553808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.553821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.557265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.557312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.557323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.560540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.560588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.560614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.564315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.564362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.564373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.567374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.567420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.567431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:23:03.413 [2024-12-07 08:14:14.570883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.570930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.570942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.574617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.574664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.574675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.577932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.577980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.578022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.581513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.581559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.581570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.585007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.585054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.585065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.588173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.588231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.588243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.591862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.591908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.591919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.594802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.594849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.413 [2024-12-07 08:14:14.594861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.413 [2024-12-07 08:14:14.598985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.413 [2024-12-07 08:14:14.599032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.599043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.602766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.602813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.602825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.605948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.605995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.606007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.609448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.609496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.609507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.612671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.612706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.612718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.616296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.616342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.616353] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.619660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.619707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.619718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.623099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.623145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.623156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.626137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.626183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.626193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.628988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.629034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.629045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.632872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.632919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.632930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.636060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.636107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.636118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.639519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.639565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.639577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.643034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.643081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.643093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.645865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.645912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.645923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.649740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.649788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.649799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.653606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.653653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.653664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.657506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.657553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.657564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.660787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.660833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.660844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.664058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.664104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:03.414 [2024-12-07 08:14:14.664115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.667929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.667975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.667986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.671740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.671786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.671797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.675034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.675079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.675090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.678636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.678681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.678692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.414 [2024-12-07 08:14:14.681914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.414 [2024-12-07 08:14:14.681948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.414 [2024-12-07 08:14:14.681960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.675 [2024-12-07 08:14:14.685821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.675 [2024-12-07 08:14:14.685855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.675 [2024-12-07 08:14:14.685868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.675 [2024-12-07 08:14:14.689029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.689075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18176 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.689086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.692350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.692414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.692441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.696203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.696258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.696270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.699175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.699233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.699245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.702455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.702501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.702512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.706118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.706164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.706175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.708991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.709036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.709047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.712159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.712206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.712243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.715324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.715370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.715381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.718783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.718830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.718841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.722275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.722320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.722331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.725480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.725526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.725537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.728902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.728947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.728957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.732153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.732199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.732235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.735569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 
00:23:03.676 [2024-12-07 08:14:14.735615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.735643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.739235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.739280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.739290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.742560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.742606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.742617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.745613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.745697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.745725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.749316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.749362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.749372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.752498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.752546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.752557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.756330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.756377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.756389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.759628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.759673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.759683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.763094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.763140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.763151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.766171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.766228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.766240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.769426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.769473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.769484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.772991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.773022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.676 [2024-12-07 08:14:14.773049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.676 [2024-12-07 08:14:14.776448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.676 [2024-12-07 08:14:14.776495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.776506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.780058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.780105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.780116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.783347] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.783392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.783403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.786449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.786494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.786505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.789789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.789837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.789848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.793004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.793050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.793060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.796451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.796497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.796509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.799985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.800031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.800042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.803473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.803519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.803530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:03.677 [2024-12-07 08:14:14.807079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.807125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.807136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.810652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.810698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.810708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.813971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.814018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.814044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.817243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.817289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.817299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.821017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.821064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.821076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.824719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.824766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.824777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.828476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.828522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.828533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.832111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.832158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.832169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.835877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.835924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.835936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.839595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.839642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.839653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.842642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.842689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.842700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.845810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.845859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.845871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.849117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.849164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.849175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.852306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.852351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.852361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.855778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.855823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.855834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.858867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.677 [2024-12-07 08:14:14.858913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.677 [2024-12-07 08:14:14.858924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.677 [2024-12-07 08:14:14.862692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.862738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.862750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.865814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.865861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.865873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.869180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.869252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.869265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.872816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.872862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.872873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.875958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.876004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 
[2024-12-07 08:14:14.876015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.880255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.880301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.880313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.883437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.883483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.883493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.887207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.887250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.887261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.890751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.890796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.890807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.893837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.893885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.893897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.897349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.897394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.897405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.900907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.900953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.900964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.904271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.904317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.904328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.907931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.907975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.907986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.911418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.911464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.911475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.915021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.915066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.915077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.918536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.918581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.918592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.921946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.921977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.921987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.924953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.924998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.925009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.928109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.928156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.928167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.931955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.932001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.932011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.935185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.935224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.935252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.937984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.938062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.938073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.941332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.941378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.678 [2024-12-07 08:14:14.941390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.678 [2024-12-07 08:14:14.945285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.678 [2024-12-07 08:14:14.945344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.679 [2024-12-07 08:14:14.945370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.948682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.948730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.948742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.952585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.952632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.952643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.956136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.956184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.956195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.959696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.959742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.959753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.963076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.963123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.963134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.967171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.967231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.967243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.970456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.970502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.970513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.973219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 
00:23:03.940 [2024-12-07 08:14:14.973274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.973286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.976772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.976817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.976829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.979805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.979850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.979861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.983319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.983365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.940 [2024-12-07 08:14:14.983376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.940 [2024-12-07 08:14:14.987064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.940 [2024-12-07 08:14:14.987111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:14.987121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:14.990090] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:14.990153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:14.990164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:14.993064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:14.993109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:14.993121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:14.996426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:14.996473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:14.996484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.000308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.000354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.000365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.004110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.004156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.004167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.008061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.008107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.008119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.011359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.011405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.011416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.015008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.015054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.015065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.018702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.018748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.018759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.022352] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.022397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.022407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.026049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.026097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.026124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.030011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.030059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.030085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.033165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.033235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.033248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.036722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.036767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.036778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.040096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.040141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.040151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.043494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.043539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.043550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:23:03.941 [2024-12-07 08:14:15.047084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.047130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.047141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.050958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.051005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.051016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.054786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.054831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.054842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.058555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.058618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.058629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.061836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.061870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.061882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.065809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.065844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.065856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.069490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.069523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.069536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.072988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.073033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.073044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.077451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.077501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.077513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.080601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.941 [2024-12-07 08:14:15.080680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.941 [2024-12-07 08:14:15.080691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.941 [2024-12-07 08:14:15.084690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.084737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.084747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.088719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.088765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.088776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.092032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.092078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.092090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.095317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.095365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.095377] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.099335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.099381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.099392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.102874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.102919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.102930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.106756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.106803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.106814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.110956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.111003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.111015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.113887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.113937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.113949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.117375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.117420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.117430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.121308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.121352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.121362] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.124752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.124797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.124807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.128702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.128748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.128759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.132434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.132482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.132494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.135289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.135335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.135345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.139105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.139151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.139162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.142396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.142443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.142453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.145913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.145961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.145973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.149517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.149564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.149575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.153385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.153432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.153442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.156303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.156349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.156361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.159294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.159339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.159349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.162809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.162854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.162864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.166437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.166482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.166492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.170147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.170192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.170203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.173984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.174031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.174042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.177299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.942 [2024-12-07 08:14:15.177344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.942 [2024-12-07 08:14:15.177355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:03.942 [2024-12-07 08:14:15.180515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x134ed10) 00:23:03.943 [2024-12-07 08:14:15.180561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:03.943 [2024-12-07 08:14:15.180572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:03.943 00:23:03.943 Latency(us) 00:23:03.943 [2024-12-07T08:14:15.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.943 [2024-12-07T08:14:15.219Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:03.943 nvme0n1 : 2.00 8825.12 1103.14 0.00 0.00 1809.86 629.29 8817.57 00:23:03.943 [2024-12-07T08:14:15.219Z] =================================================================================================================== 00:23:03.943 [2024-12-07T08:14:15.219Z] Total : 8825.12 1103.14 0.00 0.00 1809.86 629.29 8817.57 00:23:03.943 0 00:23:03.943 08:14:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:03.943 08:14:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:03.943 08:14:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:03.943 08:14:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:03.943 | .driver_specific 00:23:03.943 | .nvme_error 00:23:03.943 | .status_code 00:23:03.943 | .command_transient_transport_error' 00:23:04.508 08:14:15 -- host/digest.sh@71 -- # (( 569 > 0 )) 00:23:04.508 08:14:15 -- host/digest.sh@73 -- # killprocess 97891 00:23:04.508 08:14:15 -- common/autotest_common.sh@936 -- # '[' -z 97891 ']' 00:23:04.508 08:14:15 -- common/autotest_common.sh@940 -- # kill -0 97891 00:23:04.508 08:14:15 -- common/autotest_common.sh@941 -- # uname 00:23:04.508 08:14:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:04.508 08:14:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97891 00:23:04.508 08:14:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:04.508 08:14:15 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:04.508 08:14:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97891' 00:23:04.508 killing process with pid 97891 00:23:04.508 08:14:15 -- common/autotest_common.sh@955 -- # kill 97891 00:23:04.508 Received shutdown signal, test time was about 2.000000 seconds 00:23:04.508 00:23:04.508 Latency(us) 00:23:04.508 [2024-12-07T08:14:15.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.508 [2024-12-07T08:14:15.784Z] =================================================================================================================== 00:23:04.508 [2024-12-07T08:14:15.784Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.509 08:14:15 -- common/autotest_common.sh@960 -- # wait 97891 00:23:04.509 08:14:15 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:04.509 08:14:15 -- host/digest.sh@54 -- # local rw bs qd 00:23:04.509 08:14:15 -- host/digest.sh@56 -- # rw=randwrite 00:23:04.509 08:14:15 -- host/digest.sh@56 -- # bs=4096 00:23:04.509 08:14:15 -- host/digest.sh@56 -- # qd=128 00:23:04.509 08:14:15 -- host/digest.sh@58 -- # bperfpid=97980 00:23:04.509 08:14:15 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:04.509 08:14:15 -- host/digest.sh@60 -- # waitforlisten 97980 /var/tmp/bperf.sock 00:23:04.509 08:14:15 -- common/autotest_common.sh@829 -- # '[' -z 97980 ']' 00:23:04.509 08:14:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:04.509 08:14:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:04.509 08:14:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:04.509 08:14:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.509 08:14:15 -- common/autotest_common.sh@10 -- # set +x 00:23:04.509 [2024-12-07 08:14:15.756662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:23:04.509 [2024-12-07 08:14:15.756759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97980 ] 00:23:04.767 [2024-12-07 08:14:15.888301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.767 [2024-12-07 08:14:15.953962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.702 08:14:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.702 08:14:16 -- common/autotest_common.sh@862 -- # return 0 00:23:05.702 08:14:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:05.702 08:14:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:05.961 08:14:16 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:05.961 08:14:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.961 08:14:16 -- common/autotest_common.sh@10 -- # set +x 00:23:05.961 08:14:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.961 08:14:17 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.961 08:14:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.220 nvme0n1 00:23:06.220 08:14:17 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:06.220 08:14:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.220 08:14:17 -- common/autotest_common.sh@10 -- # set +x 00:23:06.220 08:14:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.220 08:14:17 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:06.220 08:14:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:06.220 Running I/O for 2 seconds... 
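The xtrace above walks the digest-error path end to end: bdevperf is started against /var/tmp/bperf.sock, nvme-error statistics and unlimited bdev retries are enabled, an NVMe-oF TCP controller is attached with data digest checking on (--ddgst), CRC32C corruption is injected on the target side through accel_error_inject_error, and a timed workload is driven with perform_tests before the transient-transport-error count is read back from bdev_get_iostat. A minimal shell sketch of that same RPC sequence, using only the paths, addresses, and flags visible in this trace (the $RPC shorthand is the one addition), might look like:

# Sketch only: condenses the RPC calls seen in the trace above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Keep per-status-code NVMe error counters and retry I/O indefinitely at the bdev layer.
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe-oF TCP controller with data digest enabled.
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th CRC32C calculation so data digest errors are produced
# (in the trace this is issued via rpc_cmd, i.e. the target app's default RPC socket).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the timed workload that bdevperf was started with (-t 2, randwrite, qd 128).
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Read back how many commands completed with COMMAND TRANSIENT TRANSPORT ERROR.
$RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test then asserts that this counter is greater than zero, which is the (( 569 > 0 )) check seen in the earlier randread pass.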
00:23:06.220 [2024-12-07 08:14:17.478612] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eea00 00:23:06.220 [2024-12-07 08:14:17.479553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.220 [2024-12-07 08:14:17.479600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:06.220 [2024-12-07 08:14:17.488973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ea680 00:23:06.220 [2024-12-07 08:14:17.489704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.220 [2024-12-07 08:14:17.489754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.500264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f31b8 00:23:06.480 [2024-12-07 08:14:17.500725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.500771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.510533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e99d8 00:23:06.480 [2024-12-07 08:14:17.510872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.510897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.520434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eaef0 00:23:06.480 [2024-12-07 08:14:17.520766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.520791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.530282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e5220 00:23:06.480 [2024-12-07 08:14:17.530569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.530595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.539902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ea680 00:23:06.480 [2024-12-07 08:14:17.540180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.540230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:005f p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.549611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f2510 00:23:06.480 [2024-12-07 08:14:17.549974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.550016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.559452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eee38 00:23:06.480 [2024-12-07 08:14:17.559803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.559827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.569207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6b70 00:23:06.480 [2024-12-07 08:14:17.569588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.569613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.579726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e27f0 00:23:06.480 [2024-12-07 08:14:17.580229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.580273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.589920] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e27f0 00:23:06.480 [2024-12-07 08:14:17.591137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.591181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.599850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e27f0 00:23:06.480 [2024-12-07 08:14:17.601196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.601267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.609889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f57b0 00:23:06.480 [2024-12-07 08:14:17.610713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.610774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.619741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f35f0 00:23:06.480 [2024-12-07 08:14:17.620475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.620513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.629765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f4b08 00:23:06.480 [2024-12-07 08:14:17.630667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.630713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.639716] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6738 00:23:06.480 [2024-12-07 08:14:17.640536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.640581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.649471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f3e60 00:23:06.480 [2024-12-07 08:14:17.650258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.650300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.659312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eb328 00:23:06.480 [2024-12-07 08:14:17.660011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.660042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.669082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:06.480 [2024-12-07 08:14:17.669913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.669959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.679547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:06.480 [2024-12-07 08:14:17.680111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.680142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.687915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f96f8 00:23:06.480 [2024-12-07 08:14:17.688179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.688225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.699842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4140 00:23:06.480 [2024-12-07 08:14:17.700823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.700900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.709296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f7100 00:23:06.480 [2024-12-07 08:14:17.710755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.710800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.719525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3060 00:23:06.480 [2024-12-07 08:14:17.720192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.720229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.728801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ff3c8 00:23:06.480 [2024-12-07 08:14:17.729925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.729971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.738694] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3d08 00:23:06.480 [2024-12-07 08:14:17.738914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.480 [2024-12-07 08:14:17.738937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:06.480 [2024-12-07 08:14:17.748367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e0630 00:23:06.481 [2024-12-07 08:14:17.748620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.481 [2024-12-07 08:14:17.748643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.759437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ebfd0 00:23:06.740 [2024-12-07 08:14:17.760300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.760344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.769310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eaab8 00:23:06.740 [2024-12-07 08:14:17.770682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.770727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.779278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e0a68 00:23:06.740 [2024-12-07 08:14:17.780208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.780277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.789083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3498 00:23:06.740 [2024-12-07 08:14:17.789965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.790013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.797973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190feb58 00:23:06.740 [2024-12-07 08:14:17.798090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.798124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.809609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3d08 00:23:06.740 [2024-12-07 08:14:17.811343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.811376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.820060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ec840 00:23:06.740 [2024-12-07 08:14:17.821206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.821407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.830169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ea248 00:23:06.740 [2024-12-07 08:14:17.831035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.831070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.840056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3060 00:23:06.740 [2024-12-07 08:14:17.840989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.841015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.850045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e12d8 00:23:06.740 [2024-12-07 08:14:17.850804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.740 [2024-12-07 08:14:17.850833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:06.740 [2024-12-07 08:14:17.859890] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e73e0 00:23:06.740 [2024-12-07 08:14:17.860547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.860583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.869769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:06.741 [2024-12-07 08:14:17.870666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.870693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.879746] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f8618 00:23:06.741 [2024-12-07 08:14:17.880347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.880382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.889513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:06.741 [2024-12-07 08:14:17.890161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 
08:14:17.890208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.899267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f7100 00:23:06.741 [2024-12-07 08:14:17.899822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.899858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.908847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e88f8 00:23:06.741 [2024-12-07 08:14:17.910083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.910115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.918882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3060 00:23:06.741 [2024-12-07 08:14:17.919656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.919683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.929123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e0ea0 00:23:06.741 [2024-12-07 08:14:17.929830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.929861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.938859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fef90 00:23:06.741 [2024-12-07 08:14:17.940189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.940401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.948778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190edd58 00:23:06.741 [2024-12-07 08:14:17.949767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.949818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.958733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f2510 00:23:06.741 [2024-12-07 08:14:17.959993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:06.741 [2024-12-07 08:14:17.960026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.968900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e95a0 00:23:06.741 [2024-12-07 08:14:17.970337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.970370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.978535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eff18 00:23:06.741 [2024-12-07 08:14:17.979485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.979518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.989648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ee5c8 00:23:06.741 [2024-12-07 08:14:17.990588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:17.990623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:17.999290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fdeb0 00:23:06.741 [2024-12-07 08:14:18.000323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:18.000380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:06.741 [2024-12-07 08:14:18.008976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190df550 00:23:06.741 [2024-12-07 08:14:18.010902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:06.741 [2024-12-07 08:14:18.010939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.021289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f6020 00:23:07.001 [2024-12-07 08:14:18.022057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.022300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.030750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f92c0 00:23:07.001 [2024-12-07 08:14:18.032106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:349 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.032139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.040629] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190de470 00:23:07.001 [2024-12-07 08:14:18.041135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.041170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.050584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f6cc8 00:23:07.001 [2024-12-07 08:14:18.051529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.051561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.061072] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e5a90 00:23:07.001 [2024-12-07 08:14:18.062176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.062388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.070100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0ff8 00:23:07.001 [2024-12-07 08:14:18.071220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.071273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.080242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f2d80 00:23:07.001 [2024-12-07 08:14:18.080741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.080777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.090944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f57b0 00:23:07.001 [2024-12-07 08:14:18.091609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.091644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.100868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ea680 00:23:07.001 [2024-12-07 08:14:18.101822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25026 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.102052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.111176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e7818 00:23:07.001 [2024-12-07 08:14:18.111909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.111974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.120721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f6cc8 00:23:07.001 [2024-12-07 08:14:18.120945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.120966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.132260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190de8a8 00:23:07.001 [2024-12-07 08:14:18.133390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.133423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.143675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e88f8 00:23:07.001 [2024-12-07 08:14:18.143854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.143879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.155012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f6458 00:23:07.001 [2024-12-07 08:14:18.155895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.156060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.166426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4578 00:23:07.001 [2024-12-07 08:14:18.167890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.167924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.176844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f92c0 00:23:07.001 [2024-12-07 08:14:18.177394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 
nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.177474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.187058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eb328 00:23:07.001 [2024-12-07 08:14:18.188530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.188560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:07.001 [2024-12-07 08:14:18.197346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4140 00:23:07.001 [2024-12-07 08:14:18.197896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.001 [2024-12-07 08:14:18.197932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:07.002 [2024-12-07 08:14:18.207582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fc560 00:23:07.002 [2024-12-07 08:14:18.208095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.002 [2024-12-07 08:14:18.208130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:07.002 [2024-12-07 08:14:18.217821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f2d80 00:23:07.002 [2024-12-07 08:14:18.218577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.002 [2024-12-07 08:14:18.218774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:07.002 [2024-12-07 08:14:18.228130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e9168 00:23:07.002 [2024-12-07 08:14:18.228865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.002 [2024-12-07 08:14:18.228893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:07.002 [2024-12-07 08:14:18.238166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e84c0 00:23:07.002 [2024-12-07 08:14:18.238708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.002 [2024-12-07 08:14:18.238739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:07.002 [2024-12-07 08:14:18.248579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ea248 00:23:07.002 [2024-12-07 08:14:18.249806] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.002 [2024-12-07 08:14:18.249993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.002 [2024-12-07 08:14:18.258726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190de470 00:23:07.002 [2024-12-07 08:14:18.260233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.002 [2024-12-07 08:14:18.260473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.002 [2024-12-07 08:14:18.268874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ea680 00:23:07.002 [2024-12-07 08:14:18.269482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.002 [2024-12-07 08:14:18.269703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:07.261 [2024-12-07 08:14:18.278942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ed4e8 00:23:07.262 [2024-12-07 08:14:18.279246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.279456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.291521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:07.262 [2024-12-07 08:14:18.292346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.292552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.301633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ee190 00:23:07.262 [2024-12-07 08:14:18.302511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.302718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.312114] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fdeb0 00:23:07.262 [2024-12-07 08:14:18.313398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.313610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.323192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f4b08 00:23:07.262 [2024-12-07 08:14:18.324238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.324452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.332483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f31b8 00:23:07.262 [2024-12-07 08:14:18.332960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.333116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.344149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fef90 00:23:07.262 [2024-12-07 08:14:18.344836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.345047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.355123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190edd58 00:23:07.262 [2024-12-07 08:14:18.356245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.356459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.365464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190edd58 00:23:07.262 [2024-12-07 08:14:18.366972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.367173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.376622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190edd58 00:23:07.262 [2024-12-07 08:14:18.378470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.378671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.386421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e2c28 00:23:07.262 [2024-12-07 08:14:18.387775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.387992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.396785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ed920 00:23:07.262 [2024-12-07 
08:14:18.397752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.397978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.408238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f7970 00:23:07.262 [2024-12-07 08:14:18.409199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.409439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.418659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fc560 00:23:07.262 [2024-12-07 08:14:18.420333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.420528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.428117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e01f8 00:23:07.262 [2024-12-07 08:14:18.429289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.429345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.438089] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4578 00:23:07.262 [2024-12-07 08:14:18.438918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.438962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.448000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fc128 00:23:07.262 [2024-12-07 08:14:18.448781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.448817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.457773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e5ec8 00:23:07.262 [2024-12-07 08:14:18.458539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.458575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.467544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eaab8 
00:23:07.262 [2024-12-07 08:14:18.468751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.468779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.477712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fe2e8 00:23:07.262 [2024-12-07 08:14:18.478601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.478651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.488237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eff18 00:23:07.262 [2024-12-07 08:14:18.488730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.488749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.496913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e5658 00:23:07.262 [2024-12-07 08:14:18.497163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.497187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.509032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ec408 00:23:07.262 [2024-12-07 08:14:18.510111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.510304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.518671] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eee38 00:23:07.262 [2024-12-07 08:14:18.520047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.520081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:07.262 [2024-12-07 08:14:18.528812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ed4e8 00:23:07.262 [2024-12-07 08:14:18.529769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.262 [2024-12-07 08:14:18.529797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.540984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with 
pdu=0x2000190f92c0 00:23:07.522 [2024-12-07 08:14:18.541936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.541986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.551034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:07.522 [2024-12-07 08:14:18.551879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.551942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.559788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ebb98 00:23:07.522 [2024-12-07 08:14:18.560732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.560945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.569576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ebb98 00:23:07.522 [2024-12-07 08:14:18.571067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.571101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.581544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ea680 00:23:07.522 [2024-12-07 08:14:18.582643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.582674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.590319] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6300 00:23:07.522 [2024-12-07 08:14:18.591437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.591465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.600423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e95a0 00:23:07.522 [2024-12-07 08:14:18.601860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.601894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.610405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x108b0e0) with pdu=0x2000190e6300 00:23:07.522 [2024-12-07 08:14:18.611519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.611550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.620218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.522 [2024-12-07 08:14:18.621207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.621452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.630287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.522 [2024-12-07 08:14:18.631491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.631524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.640090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.522 [2024-12-07 08:14:18.641259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.641311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.650168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.522 [2024-12-07 08:14:18.651286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.651341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.660058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.522 [2024-12-07 08:14:18.661128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.661321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.670207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.522 [2024-12-07 08:14:18.671337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.671382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.680201] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.522 [2024-12-07 08:14:18.681445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.522 [2024-12-07 08:14:18.681477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:07.522 [2024-12-07 08:14:18.690272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f3e60 00:23:07.522 [2024-12-07 08:14:18.691369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.691397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.700341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6738 00:23:07.523 [2024-12-07 08:14:18.700963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.700999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.710174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6b70 00:23:07.523 [2024-12-07 08:14:18.711335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.711388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.720485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f5378 00:23:07.523 [2024-12-07 08:14:18.721168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.721213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.730384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f5378 00:23:07.523 [2024-12-07 08:14:18.731003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.731038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.740435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6b70 00:23:07.523 [2024-12-07 08:14:18.741086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.741130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.750231] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6738 00:23:07.523 [2024-12-07 08:14:18.751019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.751062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.760842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f3e60 00:23:07.523 [2024-12-07 08:14:18.761284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.761309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.770779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4de8 00:23:07.523 [2024-12-07 08:14:18.771460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.771633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.780650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f20d8 00:23:07.523 [2024-12-07 08:14:18.781242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.781288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:07.523 [2024-12-07 08:14:18.790632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f7da8 00:23:07.523 [2024-12-07 08:14:18.791204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.523 [2024-12-07 08:14:18.791248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.801402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ebb98 00:23:07.782 [2024-12-07 08:14:18.802072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.802258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.811307] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e9168 00:23:07.782 [2024-12-07 08:14:18.812025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.812062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:07.782 
[2024-12-07 08:14:18.821099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fac10 00:23:07.782 [2024-12-07 08:14:18.822441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.822469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.831125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:07.782 [2024-12-07 08:14:18.831916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.831959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.841214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190edd58 00:23:07.782 [2024-12-07 08:14:18.842000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.842060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.851480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f92c0 00:23:07.782 [2024-12-07 08:14:18.852749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.852784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.861639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f4b08 00:23:07.782 [2024-12-07 08:14:18.862425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.862460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.873007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4578 00:23:07.782 [2024-12-07 08:14:18.873584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.873614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.882049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f96f8 00:23:07.782 [2024-12-07 08:14:18.883609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.883669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.892870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e12d8 00:23:07.782 [2024-12-07 08:14:18.893472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.893556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.902912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f5be8 00:23:07.782 [2024-12-07 08:14:18.903560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.903595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.912670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f20d8 00:23:07.782 [2024-12-07 08:14:18.913349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.782 [2024-12-07 08:14:18.913385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:07.782 [2024-12-07 08:14:18.922496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e12d8 00:23:07.783 [2024-12-07 08:14:18.923116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.923151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:18.932273] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f0788 00:23:07.783 [2024-12-07 08:14:18.933049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.933113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:18.942812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6fa8 00:23:07.783 [2024-12-07 08:14:18.943251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.943323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:18.951210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fa3a0 00:23:07.783 [2024-12-07 08:14:18.951404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.951423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:18.962316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190dfdc0 00:23:07.783 [2024-12-07 08:14:18.962982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.963015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:18.971696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e2c28 00:23:07.783 [2024-12-07 08:14:18.972961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.972995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:18.982471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ec840 00:23:07.783 [2024-12-07 08:14:18.982847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.982870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:18.994367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:07.783 [2024-12-07 08:14:18.995438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:18.995485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:19.001753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f4b08 00:23:07.783 [2024-12-07 08:14:19.002100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:19.002122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:19.013973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e8088 00:23:07.783 [2024-12-07 08:14:19.014870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:19.014903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:19.022858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f5be8 00:23:07.783 [2024-12-07 08:14:19.023750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:19.023783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:19.033067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fb048 00:23:07.783 [2024-12-07 08:14:19.034433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:19.034466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:19.042953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f31b8 00:23:07.783 [2024-12-07 08:14:19.043271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.783 [2024-12-07 08:14:19.043297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:07.783 [2024-12-07 08:14:19.055588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fcdd0 00:23:08.041 [2024-12-07 08:14:19.057017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.057050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.063134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4578 00:23:08.041 [2024-12-07 08:14:19.064199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.064255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.073121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f4b08 00:23:08.041 [2024-12-07 08:14:19.073506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.073528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.083656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ee190 00:23:08.041 [2024-12-07 08:14:19.084233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.084351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.093944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f46d0 00:23:08.041 [2024-12-07 08:14:19.095226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.095461] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.104861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f5be8 00:23:08.041 [2024-12-07 08:14:19.106517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.106737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.115356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f6020 00:23:08.041 [2024-12-07 08:14:19.116239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.116492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.126237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f92c0 00:23:08.041 [2024-12-07 08:14:19.127191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.127393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.136982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f6020 00:23:08.041 [2024-12-07 08:14:19.137864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.138124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.148124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e0ea0 00:23:08.041 [2024-12-07 08:14:19.149485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.149729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.159824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f6458 00:23:08.041 [2024-12-07 08:14:19.160767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.160967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.170397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f31b8 00:23:08.041 [2024-12-07 08:14:19.171662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.041 [2024-12-07 08:14:19.171861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.041 [2024-12-07 08:14:19.181052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f1430 00:23:08.041 [2024-12-07 08:14:19.181707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.181860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.193440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ed4e8 00:23:08.042 [2024-12-07 08:14:19.194721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.194920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.201087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f4f40 00:23:08.042 [2024-12-07 08:14:19.201429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.201655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.213343] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e5220 00:23:08.042 [2024-12-07 08:14:19.214406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.214606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.223138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3498 00:23:08.042 [2024-12-07 08:14:19.224434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.224464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.233242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ec840 00:23:08.042 [2024-12-07 08:14:19.233779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.233816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.245377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e6fa8 00:23:08.042 [2024-12-07 08:14:19.246623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 
08:14:19.246654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.252805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fd208 00:23:08.042 [2024-12-07 08:14:19.253213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.253249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.264556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e27f0 00:23:08.042 [2024-12-07 08:14:19.265318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.265353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.274054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f20d8 00:23:08.042 [2024-12-07 08:14:19.275304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.275363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.284167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fac10 00:23:08.042 [2024-12-07 08:14:19.284719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.284749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.295881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e3d08 00:23:08.042 [2024-12-07 08:14:19.296769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.296794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:08.042 [2024-12-07 08:14:19.306158] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f1ca0 00:23:08.042 [2024-12-07 08:14:19.307618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.042 [2024-12-07 08:14:19.307653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.318098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f4298 00:23:08.301 [2024-12-07 08:14:19.319161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:08.301 [2024-12-07 08:14:19.319277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.330630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190fb048 00:23:08.301 [2024-12-07 08:14:19.331618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.331690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.337800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4de8 00:23:08.301 [2024-12-07 08:14:19.338253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.338295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.350461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e95a0 00:23:08.301 [2024-12-07 08:14:19.351406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.351694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.360390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e84c0 00:23:08.301 [2024-12-07 08:14:19.361460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.361767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.371605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190eaef0 00:23:08.301 [2024-12-07 08:14:19.372808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.373011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.382688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190ef6a8 00:23:08.301 [2024-12-07 08:14:19.383881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.384162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.393121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e5220 00:23:08.301 [2024-12-07 08:14:19.394500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6458 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.394762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.406161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e0ea0 00:23:08.301 [2024-12-07 08:14:19.407412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.407675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.415742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f8a50 00:23:08.301 [2024-12-07 08:14:19.416824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.417028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.426478] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f8a50 00:23:08.301 [2024-12-07 08:14:19.427487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.427519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.436796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e4de8 00:23:08.301 [2024-12-07 08:14:19.437225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.437286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.449456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e27f0 00:23:08.301 [2024-12-07 08:14:19.450703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.450797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.459078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190f5be8 00:23:08.301 [2024-12-07 08:14:19.460145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.460288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:08.301 [2024-12-07 08:14:19.469121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b0e0) with pdu=0x2000190e12d8 00:23:08.301 [2024-12-07 08:14:19.469517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:20818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:08.301 [2024-12-07 08:14:19.469606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:08.301 00:23:08.301 Latency(us) 00:23:08.301 [2024-12-07T08:14:19.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.301 [2024-12-07T08:14:19.577Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:08.301 nvme0n1 : 2.00 24836.40 97.02 0.00 0.00 5147.68 1906.50 12988.04 00:23:08.301 [2024-12-07T08:14:19.577Z] =================================================================================================================== 00:23:08.301 [2024-12-07T08:14:19.577Z] Total : 24836.40 97.02 0.00 0.00 5147.68 1906.50 12988.04 00:23:08.301 0 00:23:08.301 08:14:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:08.301 08:14:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:08.301 08:14:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:08.301 08:14:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:08.301 | .driver_specific 00:23:08.301 | .nvme_error 00:23:08.301 | .status_code 00:23:08.301 | .command_transient_transport_error' 00:23:08.559 08:14:19 -- host/digest.sh@71 -- # (( 195 > 0 )) 00:23:08.559 08:14:19 -- host/digest.sh@73 -- # killprocess 97980 00:23:08.559 08:14:19 -- common/autotest_common.sh@936 -- # '[' -z 97980 ']' 00:23:08.559 08:14:19 -- common/autotest_common.sh@940 -- # kill -0 97980 00:23:08.559 08:14:19 -- common/autotest_common.sh@941 -- # uname 00:23:08.559 08:14:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:08.559 08:14:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97980 00:23:08.559 08:14:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:08.559 killing process with pid 97980 00:23:08.559 08:14:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:08.559 08:14:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97980' 00:23:08.559 08:14:19 -- common/autotest_common.sh@955 -- # kill 97980 00:23:08.559 Received shutdown signal, test time was about 2.000000 seconds 00:23:08.559 00:23:08.559 Latency(us) 00:23:08.559 [2024-12-07T08:14:19.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.559 [2024-12-07T08:14:19.835Z] =================================================================================================================== 00:23:08.559 [2024-12-07T08:14:19.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:08.559 08:14:19 -- common/autotest_common.sh@960 -- # wait 97980 00:23:08.831 08:14:20 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:08.831 08:14:20 -- host/digest.sh@54 -- # local rw bs qd 00:23:08.831 08:14:20 -- host/digest.sh@56 -- # rw=randwrite 00:23:08.831 08:14:20 -- host/digest.sh@56 -- # bs=131072 00:23:08.831 08:14:20 -- host/digest.sh@56 -- # qd=16 00:23:08.831 08:14:20 -- host/digest.sh@58 -- # bperfpid=98069 00:23:08.831 08:14:20 -- host/digest.sh@60 -- # waitforlisten 98069 /var/tmp/bperf.sock 00:23:08.831 08:14:20 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:08.831 08:14:20 -- common/autotest_common.sh@829 -- # '[' -z 98069 ']' 00:23:08.831 08:14:20 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:08.831 08:14:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:08.831 08:14:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:08.831 08:14:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.831 08:14:20 -- common/autotest_common.sh@10 -- # set +x 00:23:08.831 [2024-12-07 08:14:20.067764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:08.831 [2024-12-07 08:14:20.067881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98069 ] 00:23:08.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:08.831 Zero copy mechanism will not be used. 00:23:09.090 [2024-12-07 08:14:20.202599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.090 [2024-12-07 08:14:20.277017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.025 08:14:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.025 08:14:21 -- common/autotest_common.sh@862 -- # return 0 00:23:10.025 08:14:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:10.025 08:14:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:10.025 08:14:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:10.025 08:14:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.025 08:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:10.025 08:14:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.025 08:14:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.025 08:14:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.284 nvme0n1 00:23:10.284 08:14:21 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:10.284 08:14:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.284 08:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:10.544 08:14:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.544 08:14:21 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:10.544 08:14:21 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:10.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:10.544 Zero copy mechanism will not be used. 00:23:10.544 Running I/O for 2 seconds... 
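The trace above walks through one iteration of the digest error test: bdevperf is started in wait-for-RPC mode, NVMe error counters and unlimited retries are enabled, the controller is attached over TCP with data digest (--ddgst) turned on, CRC32C corruption is injected on the accel layer, I/O runs for two seconds, and the transient-transport-error count is then read back from bdev_get_iostat and checked to be non-zero. The following is a minimal shell sketch of that flow, reconstructed from the commands visible in the trace; it is not the test script itself. Paths, the socket used for the accel_error_inject_error call (assumed to be the target application's default RPC socket), and the backgrounding/wait handling are assumptions for illustration.

    # Sketch of one digest-error test pass (assumptions noted above).
    BPERF_SOCK=/var/tmp/bperf.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start bdevperf idle (-z) for a randwrite run: 128 KiB I/O, queue depth 16, 2 s.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

    # Enable per-controller NVMe error statistics and unlimited retries, then attach
    # the target with data digest enabled so every data PDU's CRC32C is verified.
    $RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject CRC32C corruption on the target side (arguments copied from the trace),
    # then drive I/O through bdevperf's RPC interface.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

    # Read back how many commands completed with TRANSIENT TRANSPORT ERROR and
    # require a non-zero count, as the trace does with "(( 195 > 0 ))".
    errs=$($RPC -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))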
00:23:10.544 [2024-12-07 08:14:21.669544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.669881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.669911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.673909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.674028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.674051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.677955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.678117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.678138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.682155] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.682284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.682320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.686195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.686336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.686356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.690122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.690243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.690266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.694314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.694458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.694479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.698384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.698615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.698656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.702508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.702751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.702788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.706729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.706891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.706912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.710806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.710922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.710943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.714940] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.715064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.715084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.718957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.719070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.719106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.723014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.723155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.723175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.727176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.727335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.727356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.731378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.731601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.731621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.735520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.735762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.735798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.739579] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.739722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.739742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.743553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.743684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.743704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.747675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.747790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.747811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.751707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.751830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.544 [2024-12-07 08:14:21.751850] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.544 [2024-12-07 08:14:21.755798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.544 [2024-12-07 08:14:21.755947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.755967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.759897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.760041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.760061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.764063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.764296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.764317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.768132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.768376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.768412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.772168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.772330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.772350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.776220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.776336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.776356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.780289] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.780418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 
[2024-12-07 08:14:21.780438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.784453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.784594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.784614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.788587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.788729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.788749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.792906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.793055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.793076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.797300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.797560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.797593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.801796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.802022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.802046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.806160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.806351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.806374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.810768] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.810883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.810904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.545 [2024-12-07 08:14:21.815387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.545 [2024-12-07 08:14:21.815502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.545 [2024-12-07 08:14:21.815526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.805 [2024-12-07 08:14:21.819859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.805 [2024-12-07 08:14:21.819992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.805 [2024-12-07 08:14:21.820015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.805 [2024-12-07 08:14:21.824301] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.805 [2024-12-07 08:14:21.824452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.805 [2024-12-07 08:14:21.824475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.805 [2024-12-07 08:14:21.828643] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.828807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.828828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.832883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.833110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.833131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.837157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.837378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.837399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.841233] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.841369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.841390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.845266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.845380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.845401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.849260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.849374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.849394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.853588] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.853756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.853778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.857633] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.857796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.857818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.861701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.861859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.861881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.865821] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.866079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.866157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.870100] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.870369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.870406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.874261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.874418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.874438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.878376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.878503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.878523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.882461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.882563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.882584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.886639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.886758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.886779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.890721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.890879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.890900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.894834] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.894990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.895011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.899215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 
[2024-12-07 08:14:21.899551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.899581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.903475] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.903678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.903699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.907686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.907832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.907853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.911755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.911891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.911912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.916198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.916362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.916399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.920330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.920437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.920458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.924436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.924568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.924589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.928548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.928712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.928733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.933009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.933242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.806 [2024-12-07 08:14:21.933277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.806 [2024-12-07 08:14:21.937063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.806 [2024-12-07 08:14:21.937347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.937402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.941152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.941303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.941323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.945285] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.945405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.945425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.949463] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.949573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.949593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.953427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.953540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.953560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.957459] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.957589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.957609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.961523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.961651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.961682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.965617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.965879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.965902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.969703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.969913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.969935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.973808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.973940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.973962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.978220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.978374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.978395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.982537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.982683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.982703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:10.807 [2024-12-07 08:14:21.986744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.986859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.986880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.990927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.991066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.991085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.994995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.995139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.995159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:21.999142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:21.999377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:21.999397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.003146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.003388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.003408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.007260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.007406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.007426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.011333] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.011461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.011481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.015308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.015428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.015448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.019400] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.019512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.019532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.023360] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.023496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.023516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.027401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.027531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.027550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.031454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.031680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.031733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.035472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.035742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.035763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.039531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.039638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.039657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.043532] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.043662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.807 [2024-12-07 08:14:22.043681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.807 [2024-12-07 08:14:22.047562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.807 [2024-12-07 08:14:22.047692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.047712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-12-07 08:14:22.051536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.808 [2024-12-07 08:14:22.051650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.051670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-12-07 08:14:22.055538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.808 [2024-12-07 08:14:22.055687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.055707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-12-07 08:14:22.059606] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.808 [2024-12-07 08:14:22.059751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.059770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:10.808 [2024-12-07 08:14:22.063727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.808 [2024-12-07 08:14:22.063947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.063967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.808 [2024-12-07 08:14:22.067801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.808 [2024-12-07 08:14:22.067991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.068010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:10.808 [2024-12-07 08:14:22.071829] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.808 [2024-12-07 08:14:22.071942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.071963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:10.808 [2024-12-07 08:14:22.076014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:10.808 [2024-12-07 08:14:22.076138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.808 [2024-12-07 08:14:22.076160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.080546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.080660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.080683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.084699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.084812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.084849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.088807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.088948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.088969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.092797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.092940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.092961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.096877] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.097099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 
[2024-12-07 08:14:22.097119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.100873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.101120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.101156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.104852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.105011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.105031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.108910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.109038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.109058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.112800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.112926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.112946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.116816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.116933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.116954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.120858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.120999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.121019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.124926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.125075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.125096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.129015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.129279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.129301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.133096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.133395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.133437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.137037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.137163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.137184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.141149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.141282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.141303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.145260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.145409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.145430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.149488] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.149626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.149657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.153702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.153841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.153865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.157747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.157887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.157909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.069 [2024-12-07 08:14:22.161790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.069 [2024-12-07 08:14:22.162052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.069 [2024-12-07 08:14:22.162123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.165982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.166252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.166273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.170090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.170278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.170313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.174152] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.174282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.174303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.178116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.178239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.178260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.182175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.182313] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.182345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.186170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.186347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.186368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.190245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.190403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.190424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.194295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.194544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.194566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.198656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.198891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.198912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.202953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.203090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.203110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.207421] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.207536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.207558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.211875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 
[2024-12-07 08:14:22.211990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.212010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.216341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.216448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.216470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.220837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.220991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.221013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.225264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.225423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.225445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.229537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.229850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.229888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.233574] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.233879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.233946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.237664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.237846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.237868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.241749] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.241871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.241892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.245813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.245936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.245958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.249846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.249974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.249995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.253898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.254063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.254083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.257933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.258098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.258119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.262119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.262352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.262394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.266183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.266414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.266434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.270289] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.270434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.070 [2024-12-07 08:14:22.270454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.070 [2024-12-07 08:14:22.274294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.070 [2024-12-07 08:14:22.274411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.274431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.278325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.278425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.278445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.282363] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.282478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.282498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.286404] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.286529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.286550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.290434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.290564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.290584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.294527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.294736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.294771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:11.071 [2024-12-07 08:14:22.298528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.298726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.298762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.302538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.302700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.302719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.306699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.306812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.306832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.310804] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.310916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.310936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.314783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.314893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.314913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.318783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.318928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.318949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.322911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.323057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.323077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.327183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.327419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.327439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.331236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.331437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.331473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.335231] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.335371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.335392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.071 [2024-12-07 08:14:22.339418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.071 [2024-12-07 08:14:22.339569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.071 [2024-12-07 08:14:22.339592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.343661] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.343807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.343829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.347915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.348030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.348051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.351982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.352123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.352145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.356058] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.356201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.356239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.360175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.360428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.360449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.364185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.364471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.364511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.368272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.368389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.368410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.372297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.372399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.372420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.376386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.376494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.376514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.380468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.380586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.380621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.384493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.384641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.384661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.388639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.388781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.332 [2024-12-07 08:14:22.388802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.332 [2024-12-07 08:14:22.392723] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.332 [2024-12-07 08:14:22.392943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.392964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.396921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.397150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.397169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.401019] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.401164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.401185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.405080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.405195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.405216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.409047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.409181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 
[2024-12-07 08:14:22.409201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.413056] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.413171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.413192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.417060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.417207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.417228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.421142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.421310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.421331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.425159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.425425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.425462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.429154] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.429395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.429419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.433264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.433409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.433429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.437162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.437311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.437333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.441121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.441234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.441269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.445105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.445220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.445257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.449147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.449313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.449333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.453137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.453306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.453327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.457264] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.457503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.457547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.461272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.461478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.461515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.465294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.465428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.465448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.469314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.469424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.469444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.473258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.473382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.473403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.477277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.477375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.477395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.481367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.481494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.481514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.485403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.485533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.485553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.489387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.489628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.489669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.493356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.493555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.493591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.497412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.497555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.333 [2024-12-07 08:14:22.497576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.333 [2024-12-07 08:14:22.501428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.333 [2024-12-07 08:14:22.501564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.501584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.505420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.505528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.505548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.509377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.509487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.509507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.513331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.513463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.513482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.517350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.517478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.517498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.521459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 
[2024-12-07 08:14:22.521741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.521766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.525443] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.525717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.525745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.529444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.529586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.529606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.533433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.533532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.533569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.537420] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.537530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.537562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.541397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.541503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.541524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.545395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.545541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.545561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.549376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.549511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.549532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.553385] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.553654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.553704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.557329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.557558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.557580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.561402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.561551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.561572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.565357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.565471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.565492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.569278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.569401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.569422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.573294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.573394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.573415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.577263] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.577416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.577437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.581384] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.581515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.581536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.585361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.585614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.585635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.589323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.589511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.589548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.593268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.593447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.593483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.597145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.597272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.597294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:11.334 [2024-12-07 08:14:22.601288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:11.334 [2024-12-07 08:14:22.601408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.334 [2024-12-07 08:14:22.601430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
[... the same three-message pattern (tcp.c:2036:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90, followed by nvme_qpair.c:243 nvme_io_qpair_print_command *NOTICE*: WRITE sqid:1 cid:15 nsid:1 len:32, followed by nvme_qpair.c:474 spdk_nvme_print_completion *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15) repeats roughly every 4 ms from 08:14:22.605 through 08:14:23.168, differing only in the lba and sqhd values ...]
00:23:12.120 [2024-12-07 08:14:23.172283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.120 [2024-12-07 08:14:23.172480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.120 [2024-12-07 08:14:23.172501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.176587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.176816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.176837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.180690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.180907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.180927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.184773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.184962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.184983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.188884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.189000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.189021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.192912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.193043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.193064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.197012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.197129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.197149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.201054] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.201221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.201268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.205139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.205298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.205319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.209267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.209492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.209512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.213217] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.213432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.213452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.217298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.217495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.217517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.221469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.221622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.221642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.225731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.225824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.225846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:12.121 [2024-12-07 08:14:23.229800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.229902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.229924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.234257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.234419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.234447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.238673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.238808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.238828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.243153] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.243414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.243447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.247528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.247764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.247789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.251855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.252041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.252061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.256036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.256172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.256192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.260151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.260300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.260323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.264297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.264434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.264455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.268353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.268524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.268544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.272376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.272552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.272573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.276664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.276891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.276911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.280735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.280970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.281012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.121 [2024-12-07 08:14:23.284772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.121 [2024-12-07 08:14:23.284961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.121 [2024-12-07 08:14:23.284981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.288797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.288924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.288944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.292823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.292954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.292975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.296910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.297023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.297043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.300983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.301152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.301171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.304929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.305066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.305086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.309022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.309239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.309271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.312927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.313143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.313163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.316908] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.317093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.317113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.320970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.321111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.321130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.324933] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.325060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.325079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.328988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.329100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.329120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.333021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.333185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.333205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.337057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.337221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.337254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.341108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.341352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 
[2024-12-07 08:14:23.341389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.345098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.345367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.345427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.349090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.349297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.349318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.353082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.353208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.353229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.357098] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.357236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.357270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.361004] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.361120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.361140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.365013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.365188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.365208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.369075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.369214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.369234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.373112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.373346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.373367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.377078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.377307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.377327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.381035] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.381217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.381238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.385038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.385152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.385172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.388961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.389073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.122 [2024-12-07 08:14:23.389093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.122 [2024-12-07 08:14:23.393102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.122 [2024-12-07 08:14:23.393237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.123 [2024-12-07 08:14:23.393283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.397283] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.397455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.397478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.401446] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.401600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.401622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.405408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.405647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.405731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.409297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.409506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.409526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.413190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.413391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.413411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.417223] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.417363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.417383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.421105] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.421220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.421253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.382 [2024-12-07 08:14:23.425026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.382 [2024-12-07 08:14:23.425138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.382 [2024-12-07 08:14:23.425158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.429106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.429293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.429315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.433068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.433211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.433232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.437188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.437421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.437441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.441120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.441374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.441439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.445047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.445214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.445250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.449143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.449328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.449349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.453005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 
[2024-12-07 08:14:23.453125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.453146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.456999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.457112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.457132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.461014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.461178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.461198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.464979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.465118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.465138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.469104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.469355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.469406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.473054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.473297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.473318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.477080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.477283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.477304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.481009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with 
pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.481138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.481158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.485026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.485141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.485161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.489008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.489120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.489140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.493013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.493178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.493198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.497013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.497156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.497176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.501136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.501386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.501423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.505014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.505228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.505264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.509080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.509305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.509326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.513023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.513140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.513160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.517045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.517179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.517200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.521103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.521250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.521272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.525078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.525285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.525307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.529044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.529183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.529219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.533179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.383 [2024-12-07 08:14:23.533422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.383 [2024-12-07 08:14:23.533465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.383 [2024-12-07 08:14:23.537221] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.537445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.537466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.541049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.541272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.541294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.545012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.545126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.545146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.549059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.549189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.549225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.552976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.553089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.553109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.557038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.557223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.557256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.560992] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.561137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.561157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.384 
[2024-12-07 08:14:23.565086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.565345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.565389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.569048] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.569293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.569313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.573082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.573292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.573314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.577215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.577340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.577361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.581137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.581267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.581287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.585070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.585188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.585208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.589061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.589257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.589278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.592941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.593080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.593099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.597068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.597313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.597334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.600965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.601156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.601176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.604855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.605050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.605070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.608872] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.609008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.609028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.612756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.612869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.612888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.616663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.616774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.616794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.620663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.620832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.620852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.624530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.624689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.624708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.628558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.628784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.628804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.632451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.632680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.632710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.636401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.636592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.636613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.384 [2024-12-07 08:14:23.640375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.384 [2024-12-07 08:14:23.640488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.384 [2024-12-07 08:14:23.640508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.385 [2024-12-07 08:14:23.644328] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.385 [2024-12-07 08:14:23.644440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.385 [2024-12-07 08:14:23.644461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.385 [2024-12-07 08:14:23.648292] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.385 [2024-12-07 08:14:23.648406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.385 [2024-12-07 08:14:23.648426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:12.385 [2024-12-07 08:14:23.652397] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.385 [2024-12-07 08:14:23.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.385 [2024-12-07 08:14:23.652609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:12.643 [2024-12-07 08:14:23.656679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.643 [2024-12-07 08:14:23.656893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.643 [2024-12-07 08:14:23.656915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:12.643 [2024-12-07 08:14:23.660819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x108b280) with pdu=0x2000190fef90 00:23:12.643 [2024-12-07 08:14:23.661082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:12.643 [2024-12-07 08:14:23.661148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:12.643 00:23:12.643 Latency(us) 00:23:12.643 [2024-12-07T08:14:23.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.643 [2024-12-07T08:14:23.919Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:12.643 nvme0n1 : 2.00 7564.43 945.55 0.00 0.00 2110.36 1630.95 6821.70 00:23:12.643 [2024-12-07T08:14:23.919Z] =================================================================================================================== 00:23:12.643 [2024-12-07T08:14:23.919Z] Total : 7564.43 945.55 0.00 0.00 2110.36 1630.95 6821.70 00:23:12.643 0 00:23:12.643 08:14:23 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:12.643 08:14:23 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:12.643 08:14:23 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:12.643 | .driver_specific 00:23:12.643 | .nvme_error 00:23:12.643 | .status_code 00:23:12.643 | .command_transient_transport_error' 00:23:12.643 08:14:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:12.902 08:14:23 -- host/digest.sh@71 -- # (( 488 > 0 )) 00:23:12.902 08:14:23 -- host/digest.sh@73 -- # killprocess 98069 00:23:12.902 08:14:23 -- common/autotest_common.sh@936 -- # '[' -z 98069 ']' 00:23:12.902 08:14:23 -- common/autotest_common.sh@940 -- # kill -0 98069 00:23:12.902 08:14:23 -- 
common/autotest_common.sh@941 -- # uname 00:23:12.902 08:14:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:12.902 08:14:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98069 00:23:12.902 08:14:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:12.902 08:14:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:12.902 killing process with pid 98069 00:23:12.902 08:14:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98069' 00:23:12.902 Received shutdown signal, test time was about 2.000000 seconds 00:23:12.902 00:23:12.902 Latency(us) 00:23:12.902 [2024-12-07T08:14:24.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.902 [2024-12-07T08:14:24.178Z] =================================================================================================================== 00:23:12.902 [2024-12-07T08:14:24.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.902 08:14:23 -- common/autotest_common.sh@955 -- # kill 98069 00:23:12.902 08:14:23 -- common/autotest_common.sh@960 -- # wait 98069 00:23:13.161 08:14:24 -- host/digest.sh@115 -- # killprocess 97761 00:23:13.161 08:14:24 -- common/autotest_common.sh@936 -- # '[' -z 97761 ']' 00:23:13.161 08:14:24 -- common/autotest_common.sh@940 -- # kill -0 97761 00:23:13.161 08:14:24 -- common/autotest_common.sh@941 -- # uname 00:23:13.161 08:14:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.161 08:14:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97761 00:23:13.161 08:14:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:13.161 killing process with pid 97761 00:23:13.161 08:14:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:13.161 08:14:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97761' 00:23:13.161 08:14:24 -- common/autotest_common.sh@955 -- # kill 97761 00:23:13.161 08:14:24 -- common/autotest_common.sh@960 -- # wait 97761 00:23:13.161 00:23:13.161 real 0m18.401s 00:23:13.161 user 0m35.081s 00:23:13.161 sys 0m4.763s 00:23:13.161 08:14:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:13.161 08:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:13.161 ************************************ 00:23:13.161 END TEST nvmf_digest_error 00:23:13.161 ************************************ 00:23:13.420 08:14:24 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:13.420 08:14:24 -- host/digest.sh@139 -- # nvmftestfini 00:23:13.420 08:14:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:13.421 08:14:24 -- nvmf/common.sh@116 -- # sync 00:23:13.421 08:14:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:13.421 08:14:24 -- nvmf/common.sh@119 -- # set +e 00:23:13.421 08:14:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:13.421 08:14:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:13.421 rmmod nvme_tcp 00:23:13.421 rmmod nvme_fabrics 00:23:13.421 rmmod nvme_keyring 00:23:13.421 08:14:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:13.421 08:14:24 -- nvmf/common.sh@123 -- # set -e 00:23:13.421 08:14:24 -- nvmf/common.sh@124 -- # return 0 00:23:13.421 08:14:24 -- nvmf/common.sh@477 -- # '[' -n 97761 ']' 00:23:13.421 08:14:24 -- nvmf/common.sh@478 -- # killprocess 97761 00:23:13.421 08:14:24 -- common/autotest_common.sh@936 -- # '[' -z 97761 ']' 00:23:13.421 08:14:24 -- common/autotest_common.sh@940 -- # kill -0 97761 00:23:13.421 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97761) - No such process 00:23:13.421 Process with pid 97761 is not found 00:23:13.421 08:14:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97761 is not found' 00:23:13.421 08:14:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:13.421 08:14:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:13.421 08:14:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:13.421 08:14:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.421 08:14:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:13.421 08:14:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.421 08:14:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.421 08:14:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.421 08:14:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:13.421 00:23:13.421 real 0m35.834s 00:23:13.421 user 1m7.181s 00:23:13.421 sys 0m9.699s 00:23:13.421 08:14:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:13.421 08:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:13.421 ************************************ 00:23:13.421 END TEST nvmf_digest 00:23:13.421 ************************************ 00:23:13.421 08:14:24 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:13.421 08:14:24 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:13.421 08:14:24 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:13.421 08:14:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:13.421 08:14:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.421 08:14:24 -- common/autotest_common.sh@10 -- # set +x 00:23:13.421 ************************************ 00:23:13.421 START TEST nvmf_mdns_discovery 00:23:13.421 ************************************ 00:23:13.421 08:14:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:13.682 * Looking for test storage... 00:23:13.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:13.682 08:14:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:13.682 08:14:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:13.682 08:14:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:13.682 08:14:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:13.682 08:14:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:13.682 08:14:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:13.682 08:14:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:13.682 08:14:24 -- scripts/common.sh@335 -- # IFS=.-: 00:23:13.682 08:14:24 -- scripts/common.sh@335 -- # read -ra ver1 00:23:13.682 08:14:24 -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.682 08:14:24 -- scripts/common.sh@336 -- # read -ra ver2 00:23:13.682 08:14:24 -- scripts/common.sh@337 -- # local 'op=<' 00:23:13.682 08:14:24 -- scripts/common.sh@339 -- # ver1_l=2 00:23:13.682 08:14:24 -- scripts/common.sh@340 -- # ver2_l=1 00:23:13.682 08:14:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:13.682 08:14:24 -- scripts/common.sh@343 -- # case "$op" in 00:23:13.682 08:14:24 -- scripts/common.sh@344 -- # : 1 00:23:13.682 08:14:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:13.682 08:14:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.682 08:14:24 -- scripts/common.sh@364 -- # decimal 1 00:23:13.682 08:14:24 -- scripts/common.sh@352 -- # local d=1 00:23:13.682 08:14:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.682 08:14:24 -- scripts/common.sh@354 -- # echo 1 00:23:13.682 08:14:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:13.682 08:14:24 -- scripts/common.sh@365 -- # decimal 2 00:23:13.682 08:14:24 -- scripts/common.sh@352 -- # local d=2 00:23:13.682 08:14:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.682 08:14:24 -- scripts/common.sh@354 -- # echo 2 00:23:13.682 08:14:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:13.682 08:14:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:13.682 08:14:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:13.682 08:14:24 -- scripts/common.sh@367 -- # return 0 00:23:13.682 08:14:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.682 08:14:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.682 --rc genhtml_branch_coverage=1 00:23:13.682 --rc genhtml_function_coverage=1 00:23:13.682 --rc genhtml_legend=1 00:23:13.682 --rc geninfo_all_blocks=1 00:23:13.682 --rc geninfo_unexecuted_blocks=1 00:23:13.682 00:23:13.682 ' 00:23:13.682 08:14:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.682 --rc genhtml_branch_coverage=1 00:23:13.682 --rc genhtml_function_coverage=1 00:23:13.682 --rc genhtml_legend=1 00:23:13.682 --rc geninfo_all_blocks=1 00:23:13.682 --rc geninfo_unexecuted_blocks=1 00:23:13.682 00:23:13.682 ' 00:23:13.682 08:14:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.682 --rc genhtml_branch_coverage=1 00:23:13.682 --rc genhtml_function_coverage=1 00:23:13.682 --rc genhtml_legend=1 00:23:13.682 --rc geninfo_all_blocks=1 00:23:13.682 --rc geninfo_unexecuted_blocks=1 00:23:13.682 00:23:13.682 ' 00:23:13.682 08:14:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:13.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.682 --rc genhtml_branch_coverage=1 00:23:13.682 --rc genhtml_function_coverage=1 00:23:13.682 --rc genhtml_legend=1 00:23:13.682 --rc geninfo_all_blocks=1 00:23:13.682 --rc geninfo_unexecuted_blocks=1 00:23:13.682 00:23:13.682 ' 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:13.682 08:14:24 -- nvmf/common.sh@7 -- # uname -s 00:23:13.682 08:14:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.682 08:14:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.682 08:14:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.682 08:14:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.682 08:14:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.682 08:14:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.682 08:14:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.682 08:14:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.682 08:14:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.682 08:14:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.682 08:14:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
00:23:13.682 08:14:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:23:13.682 08:14:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.682 08:14:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.682 08:14:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:13.682 08:14:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:13.682 08:14:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.682 08:14:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.682 08:14:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.682 08:14:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.682 08:14:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.682 08:14:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.682 08:14:24 -- paths/export.sh@5 -- # export PATH 00:23:13.682 08:14:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.682 08:14:24 -- nvmf/common.sh@46 -- # : 0 00:23:13.682 08:14:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:13.682 08:14:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:13.682 08:14:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:13.682 08:14:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.682 08:14:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.682 08:14:24 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:13.682 08:14:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:13.682 08:14:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:13.682 08:14:24 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:13.682 08:14:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:13.682 08:14:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.682 08:14:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:13.683 08:14:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:13.683 08:14:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:13.683 08:14:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.683 08:14:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.683 08:14:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.683 08:14:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:13.683 08:14:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:13.683 08:14:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:13.683 08:14:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:13.683 08:14:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:13.683 08:14:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:13.683 08:14:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.683 08:14:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.683 08:14:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:13.683 08:14:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:13.683 08:14:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:13.683 08:14:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:13.683 08:14:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:13.683 08:14:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.683 08:14:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:13.683 08:14:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:13.683 08:14:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:13.683 08:14:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:13.683 08:14:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:13.683 08:14:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:13.683 Cannot find device "nvmf_tgt_br" 00:23:13.683 08:14:24 -- nvmf/common.sh@154 -- # true 00:23:13.683 08:14:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:13.683 Cannot find device "nvmf_tgt_br2" 00:23:13.683 08:14:24 -- nvmf/common.sh@155 -- # true 00:23:13.683 08:14:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:13.683 08:14:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:13.683 Cannot find device "nvmf_tgt_br" 00:23:13.683 08:14:24 -- nvmf/common.sh@157 -- # true 00:23:13.683 
08:14:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:13.683 Cannot find device "nvmf_tgt_br2" 00:23:13.683 08:14:24 -- nvmf/common.sh@158 -- # true 00:23:13.683 08:14:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:13.683 08:14:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:13.942 08:14:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:13.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.942 08:14:24 -- nvmf/common.sh@161 -- # true 00:23:13.942 08:14:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:13.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:13.942 08:14:24 -- nvmf/common.sh@162 -- # true 00:23:13.942 08:14:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:13.942 08:14:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:13.942 08:14:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:13.942 08:14:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:13.942 08:14:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:13.942 08:14:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:13.942 08:14:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:13.942 08:14:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:13.942 08:14:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:13.942 08:14:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:13.942 08:14:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:13.942 08:14:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:13.942 08:14:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:13.942 08:14:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:13.942 08:14:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:13.942 08:14:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:13.942 08:14:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:13.942 08:14:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:13.942 08:14:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:13.942 08:14:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:13.942 08:14:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:13.942 08:14:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:13.942 08:14:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:13.942 08:14:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:13.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:23:13.942 00:23:13.942 --- 10.0.0.2 ping statistics --- 00:23:13.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.942 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:13.942 08:14:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:13.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:13.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:23:13.942 00:23:13.942 --- 10.0.0.3 ping statistics --- 00:23:13.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.942 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:13.943 08:14:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:13.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:13.943 00:23:13.943 --- 10.0.0.1 ping statistics --- 00:23:13.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.943 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:13.943 08:14:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.943 08:14:25 -- nvmf/common.sh@421 -- # return 0 00:23:13.943 08:14:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:13.943 08:14:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.943 08:14:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:13.943 08:14:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:13.943 08:14:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.943 08:14:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:13.943 08:14:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:13.943 08:14:25 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:13.943 08:14:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:13.943 08:14:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:13.943 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:13.943 08:14:25 -- nvmf/common.sh@469 -- # nvmfpid=98371 00:23:13.943 08:14:25 -- nvmf/common.sh@470 -- # waitforlisten 98371 00:23:13.943 08:14:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:13.943 08:14:25 -- common/autotest_common.sh@829 -- # '[' -z 98371 ']' 00:23:13.943 08:14:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.943 08:14:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.943 08:14:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.943 08:14:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.943 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.201 [2024-12-07 08:14:25.237572] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:14.201 [2024-12-07 08:14:25.237703] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.201 [2024-12-07 08:14:25.377050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.201 [2024-12-07 08:14:25.454650] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.201 [2024-12-07 08:14:25.454810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.201 [2024-12-07 08:14:25.454823] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:14.201 [2024-12-07 08:14:25.454831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.201 [2024-12-07 08:14:25.454861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.459 08:14:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.459 08:14:25 -- common/autotest_common.sh@862 -- # return 0 00:23:14.459 08:14:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:14.459 08:14:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 08:14:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 [2024-12-07 08:14:25.636966] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 [2024-12-07 08:14:25.649132] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 null0 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 null1 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 null2 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 null3 00:23:14.459 08:14:25 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:14.459 08:14:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 08:14:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@47 -- # hostpid=98413 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:14.459 08:14:25 -- host/mdns_discovery.sh@48 -- # waitforlisten 98413 /tmp/host.sock 00:23:14.459 08:14:25 -- common/autotest_common.sh@829 -- # '[' -z 98413 ']' 00:23:14.459 08:14:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:14.459 08:14:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.459 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:14.459 08:14:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:14.459 08:14:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.459 08:14:25 -- common/autotest_common.sh@10 -- # set +x 00:23:14.717 [2024-12-07 08:14:25.744306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:14.717 [2024-12-07 08:14:25.744393] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98413 ] 00:23:14.717 [2024-12-07 08:14:25.876606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.717 [2024-12-07 08:14:25.950797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:14.717 [2024-12-07 08:14:25.951007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.649 08:14:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.649 08:14:26 -- common/autotest_common.sh@862 -- # return 0 00:23:15.649 08:14:26 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:15.649 08:14:26 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:15.649 08:14:26 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:15.649 08:14:26 -- host/mdns_discovery.sh@57 -- # avahipid=98443 00:23:15.649 08:14:26 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:15.649 08:14:26 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:15.649 08:14:26 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:15.649 Process 1061 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:15.649 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:15.649 Successfully dropped root privileges. 00:23:15.649 avahi-daemon 0.8 starting up. 00:23:15.649 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:15.649 Successfully called chroot(). 00:23:15.649 Successfully dropped remaining capabilities. 00:23:15.649 No service file found in /etc/avahi/services. 00:23:15.649 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
00:23:15.649 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:16.609 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:16.609 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:16.609 Network interface enumeration completed. 00:23:16.609 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:16.609 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:16.609 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:16.609 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:16.609 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 194824965. 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:16.867 08:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.867 08:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:16.867 08:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:16.867 08:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.867 08:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:16.867 08:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:16.867 08:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@68 -- # xargs 00:23:16.867 08:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@68 -- # sort 00:23:16.867 08:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.867 08:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.867 08:14:27 -- common/autotest_common.sh@10 -- # set +x 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@64 -- # sort 00:23:16.867 08:14:27 -- host/mdns_discovery.sh@64 -- # xargs 00:23:16.867 08:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:16.867 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.867 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.867 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:16.867 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@68 -- # sort 
00:23:16.867 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@68 -- # xargs 00:23:16.867 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.867 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.867 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@64 -- # sort 00:23:16.867 08:14:28 -- host/mdns_discovery.sh@64 -- # xargs 00:23:16.867 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@68 -- # sort 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@68 -- # xargs 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 [2024-12-07 08:14:28.208151] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@64 -- # xargs 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@64 -- # sort 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 [2024-12-07 08:14:28.265879] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 
08:14:28 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 [2024-12-07 08:14:28.305784] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:17.125 08:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.125 08:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 [2024-12-07 08:14:28.313745] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:17.125 08:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=98494 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:17.125 08:14:28 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:18.057 [2024-12-07 08:14:29.108155] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:18.057 Established under name 'CDC' 00:23:18.314 [2024-12-07 08:14:29.508162] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:18.315 [2024-12-07 08:14:29.508188] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:18.315 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:18.315 cookie is 0 00:23:18.315 is_local: 1 00:23:18.315 our_own: 0 00:23:18.315 wide_area: 0 00:23:18.315 multicast: 1 00:23:18.315 cached: 1 00:23:18.572 [2024-12-07 08:14:29.608154] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:18.572 [2024-12-07 08:14:29.608173] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:18.572 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:18.572 cookie is 0 00:23:18.572 is_local: 1 00:23:18.572 our_own: 0 00:23:18.572 wide_area: 0 00:23:18.572 multicast: 1 00:23:18.572 
cached: 1 00:23:19.503 [2024-12-07 08:14:30.512884] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:19.503 [2024-12-07 08:14:30.512915] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:19.503 [2024-12-07 08:14:30.512932] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:19.503 [2024-12-07 08:14:30.598982] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:19.503 [2024-12-07 08:14:30.612617] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:19.503 [2024-12-07 08:14:30.612635] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:19.503 [2024-12-07 08:14:30.612654] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.503 [2024-12-07 08:14:30.657185] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:19.503 [2024-12-07 08:14:30.657216] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:19.503 [2024-12-07 08:14:30.699363] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:19.503 [2024-12-07 08:14:30.754253] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:19.503 [2024-12-07 08:14:30.754488] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@80 -- # sort 00:23:22.786 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.786 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@80 -- # xargs 00:23:22.786 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.786 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@76 -- # sort 00:23:22.786 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@76 -- # xargs 00:23:22.786 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@68 -- # sort 00:23:22.786 08:14:33 -- 
host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:22.786 08:14:33 -- host/mdns_discovery.sh@68 -- # xargs 00:23:22.787 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@64 -- # sort 00:23:22.787 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@64 -- # xargs 00:23:22.787 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.787 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.787 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:22.787 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # xargs 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:22.787 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:22.787 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:22.787 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:22.787 08:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.787 08:14:33 -- common/autotest_common.sh@10 -- # set +x 00:23:22.787 08:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.787 08:14:33 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:23.722 08:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@64 -- # sort 00:23:23.722 08:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@64 -- # xargs 00:23:23.722 08:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:23.722 08:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.722 08:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:23.722 08:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:23.722 08:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.722 08:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:23.722 [2024-12-07 08:14:34.801510] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.722 [2024-12-07 08:14:34.802024] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.722 [2024-12-07 08:14:34.802260] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.722 [2024-12-07 08:14:34.802448] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:23.722 [2024-12-07 08:14:34.802593] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:23.722 08:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:23.722 08:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.722 08:14:34 -- common/autotest_common.sh@10 -- # set +x 00:23:23.722 [2024-12-07 08:14:34.813408] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:23.722 [2024-12-07 08:14:34.814024] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:23.722 [2024-12-07 08:14:34.814075] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:23.722 08:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.722 08:14:34 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:23.722 [2024-12-07 08:14:34.945113] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:23.722 [2024-12-07 08:14:34.945346] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:23.982 [2024-12-07 08:14:35.005532] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:23.982 [2024-12-07 08:14:35.005755] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:23.982 [2024-12-07 08:14:35.005768] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:23.982 [2024-12-07 08:14:35.005789] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:23.982 [2024-12-07 08:14:35.006389] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:23.982 [2024-12-07 08:14:35.006402] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.982 [2024-12-07 08:14:35.006407] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:23.982 [2024-12-07 08:14:35.006421] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:23.982 [2024-12-07 08:14:35.051234] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:23.982 [2024-12-07 08:14:35.051399] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:23.982 [2024-12-07 08:14:35.053246] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:23.982 [2024-12-07 08:14:35.053277] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:24.913 08:14:35 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:24.913 08:14:35 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:24.913 08:14:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.913 08:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:24.913 08:14:35 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:24.913 08:14:35 -- host/mdns_discovery.sh@68 -- # sort 00:23:24.913 08:14:35 -- host/mdns_discovery.sh@68 -- # xargs 00:23:24.913 08:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.913 08:14:35 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:24.913 08:14:35 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.914 08:14:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.914 08:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@64 -- # xargs 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@64 -- # sort 00:23:24.914 08:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.914 08:14:35 -- host/mdns_discovery.sh@72 -- # xargs 00:23:24.914 08:14:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.914 08:14:35 -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 08:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@72 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:24.914 08:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.914 08:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@72 -- # xargs 00:23:24.914 08:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:24.914 08:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.914 08:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 08:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.914 08:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.914 08:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 [2024-12-07 08:14:36.114857] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.914 [2024-12-07 08:14:36.114888] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:24.914 [2024-12-07 08:14:36.114917] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:24.914 [2024-12-07 08:14:36.114929] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:24.914 08:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:24.914 08:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.914 08:14:36 -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 [2024-12-07 08:14:36.122863] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:24.914 [2024-12-07 08:14:36.122909] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:24.914 [2024-12-07 08:14:36.123690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.123720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.123748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.123757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.123765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.123773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.123782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.123790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.123799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:24.914 [2024-12-07 08:14:36.126764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.126795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.126823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.126832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.126842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.126850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.126859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.914 [2024-12-07 08:14:36.126867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c 08:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.914 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.914 [2024-12-07 08:14:36.126878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:24.914 08:14:36 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:24.914 [2024-12-07 08:14:36.133654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:24.914 [2024-12-07 08:14:36.136716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:24.914 [2024-12-07 08:14:36.143670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.914 [2024-12-07 08:14:36.143810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.914 [2024-12-07 08:14:36.143872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.914 [2024-12-07 08:14:36.143904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:24.914 [2024-12-07 08:14:36.143915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:24.914 [2024-12-07 08:14:36.143931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:24.914 [2024-12-07 08:14:36.143962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.914 [2024-12-07 08:14:36.143973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.914 [2024-12-07 08:14:36.143983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.914 [2024-12-07 08:14:36.143998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.914 [2024-12-07 08:14:36.146733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:24.914 [2024-12-07 08:14:36.146841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.914 [2024-12-07 08:14:36.146884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.914 [2024-12-07 08:14:36.146899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:24.914 [2024-12-07 08:14:36.146908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:24.914 [2024-12-07 08:14:36.146922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:24.914 [2024-12-07 08:14:36.146934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:24.914 [2024-12-07 08:14:36.146941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:24.914 [2024-12-07 08:14:36.146965] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:24.914 [2024-12-07 08:14:36.147011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.914 [2024-12-07 08:14:36.153754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.914 [2024-12-07 08:14:36.153847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.914 [2024-12-07 08:14:36.153907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.914 [2024-12-07 08:14:36.153939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:24.914 [2024-12-07 08:14:36.153949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:24.914 [2024-12-07 08:14:36.153964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:24.914 [2024-12-07 08:14:36.154008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.914 [2024-12-07 08:14:36.154018] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.914 [2024-12-07 08:14:36.154027] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.914 [2024-12-07 08:14:36.154056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
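[annotation] The repeated "connect() failed, errno = 111" / "Resetting controller failed." cycles above and below are the host-side bdev_nvme reconnect poller still retrying the 10.0.0.2:4420 and 10.0.0.3:4420 paths that the test just dropped with nvmf_subsystem_remove_listener; errno 111 is ECONNREFUSED, which is expected once nothing listens on port 4420 any more. A minimal sketch of the same target/host interaction, assuming the test's rpc_cmd wrapper maps onto SPDK's scripts/rpc.py and the same /tmp/host.sock host application socket used in this run:

  # Target side: drop the 4420 listeners that discovery originally advertised.
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

  # Host side: the controllers attached on 4420 lose their connection and retry with
  # ECONNREFUSED (111) until the AER-triggered discovery log page removes the old path.
  # After that, only the 4421 paths should remain on each controller:
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid'    # expected: 4421

The 4421 listeners were added one step earlier in this run, so the failover target already exists by the time 4420 disappears.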
00:23:24.914 [2024-12-07 08:14:36.156795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:24.914 [2024-12-07 08:14:36.156897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.914 [2024-12-07 08:14:36.156957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.156988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:24.915 [2024-12-07 08:14:36.156997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:24.915 [2024-12-07 08:14:36.157012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:24.915 [2024-12-07 08:14:36.157043] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:24.915 [2024-12-07 08:14:36.157053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:24.915 [2024-12-07 08:14:36.157061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:24.915 [2024-12-07 08:14:36.157074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.915 [2024-12-07 08:14:36.163801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.915 [2024-12-07 08:14:36.163904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.163945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.163960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:24.915 [2024-12-07 08:14:36.163969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:24.915 [2024-12-07 08:14:36.163983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:24.915 [2024-12-07 08:14:36.163994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.915 [2024-12-07 08:14:36.164001] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.915 [2024-12-07 08:14:36.164009] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.915 [2024-12-07 08:14:36.164037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:24.915 [2024-12-07 08:14:36.166851] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:24.915 [2024-12-07 08:14:36.166939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.166980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.167010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:24.915 [2024-12-07 08:14:36.167019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:24.915 [2024-12-07 08:14:36.167049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:24.915 [2024-12-07 08:14:36.167078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:24.915 [2024-12-07 08:14:36.167087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:24.915 [2024-12-07 08:14:36.167096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:24.915 [2024-12-07 08:14:36.167109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.915 [2024-12-07 08:14:36.173863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.915 [2024-12-07 08:14:36.173963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.174039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.174070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:24.915 [2024-12-07 08:14:36.174095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:24.915 [2024-12-07 08:14:36.174111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:24.915 [2024-12-07 08:14:36.174140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.915 [2024-12-07 08:14:36.174149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.915 [2024-12-07 08:14:36.174158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.915 [2024-12-07 08:14:36.174171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:24.915 [2024-12-07 08:14:36.176912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:24.915 [2024-12-07 08:14:36.177001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.177044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.177075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:24.915 [2024-12-07 08:14:36.177085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:24.915 [2024-12-07 08:14:36.177099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:24.915 [2024-12-07 08:14:36.177111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:24.915 [2024-12-07 08:14:36.177119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:24.915 [2024-12-07 08:14:36.177127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:24.915 [2024-12-07 08:14:36.177158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.915 [2024-12-07 08:14:36.183929] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:24.915 [2024-12-07 08:14:36.184034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.184078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.184094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:24.915 [2024-12-07 08:14:36.184103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:24.915 [2024-12-07 08:14:36.184118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:24.915 [2024-12-07 08:14:36.184130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:24.915 [2024-12-07 08:14:36.184138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:24.915 [2024-12-07 08:14:36.184146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:24.915 [2024-12-07 08:14:36.184173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
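[annotation] The earlier half of this sequence (around 08:14:33-08:14:34) grows the subsystems instead of shrinking them: nvmf_subsystem_add_ns with null1/null3 makes a second namespace appear on the host as mdns*_nvme0n2, and nvmf_subsystem_add_listener on port 4421 produces the "got aer" / "new path" messages seen above. A hedged sketch of those calls and the host-side checks, under the same scripts/rpc.py assumption as above (one subsystem shown; the test does the same for cnode20 on 10.0.0.3):

  # Target side: add a namespace and a second (4421) listener to the subsystem.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

  # Host side: the discovery AER yields one new bdev per namespace and one new path.
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort       # mdns1_nvme0n1 mdns1_nvme0n2 ...
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n                               # 4420 4421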
00:23:24.915 [2024-12-07 08:14:36.186958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:24.915 [2024-12-07 08:14:36.187065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.187108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.915 [2024-12-07 08:14:36.187123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:24.915 [2024-12-07 08:14:36.187133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:24.915 [2024-12-07 08:14:36.187146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:24.915 [2024-12-07 08:14:36.187189] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:24.915 [2024-12-07 08:14:36.187199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:24.915 [2024-12-07 08:14:36.187207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:24.915 [2024-12-07 08:14:36.187219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.173 [2024-12-07 08:14:36.193991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.173 [2024-12-07 08:14:36.194145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.173 [2024-12-07 08:14:36.194188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.173 [2024-12-07 08:14:36.194203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:25.173 [2024-12-07 08:14:36.194212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:25.173 [2024-12-07 08:14:36.194237] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:25.173 [2024-12-07 08:14:36.194267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.173 [2024-12-07 08:14:36.194276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.173 [2024-12-07 08:14:36.194284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.173 [2024-12-07 08:14:36.194297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.173 [2024-12-07 08:14:36.197020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.174 [2024-12-07 08:14:36.197139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.197183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.197198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:25.174 [2024-12-07 08:14:36.197208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.197238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.197290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.197302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.197311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.174 [2024-12-07 08:14:36.197325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.174 [2024-12-07 08:14:36.204086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.174 [2024-12-07 08:14:36.204190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.204263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.204280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:25.174 [2024-12-07 08:14:36.204290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.204304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.204333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.204343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.204352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.174 [2024-12-07 08:14:36.204364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.174 [2024-12-07 08:14:36.207079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.174 [2024-12-07 08:14:36.207184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.207239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.207256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:25.174 [2024-12-07 08:14:36.207265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.207280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.207292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.207299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.207307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.174 [2024-12-07 08:14:36.207320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.174 [2024-12-07 08:14:36.214149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.174 [2024-12-07 08:14:36.214271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.214317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.214333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:25.174 [2024-12-07 08:14:36.214342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.214357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.214396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.214408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.214432] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.174 [2024-12-07 08:14:36.214446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
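[annotation] The get_notification_count steps in this log read the bdev add/remove event stream with notify_get_notifications -i <last seen id> and count the returned entries with jq '. | length'; notify_id is then advanced by that count (2 -> 4 -> 8 over this run). Roughly, using the same host socket, with the variable handling illustrative rather than the exact helper from the test:

  notify_id=0
  notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
      | jq '. | length')                       # events newer than notify_id
  notify_id=$((notify_id + notification_count))
  echo "count=$notification_count next_offset=$notify_id"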
00:23:25.174 [2024-12-07 08:14:36.217138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.174 [2024-12-07 08:14:36.217250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.217293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.217308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:25.174 [2024-12-07 08:14:36.217317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.217331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.217351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.217360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.217368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.174 [2024-12-07 08:14:36.217381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.174 [2024-12-07 08:14:36.224240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.174 [2024-12-07 08:14:36.224346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.224390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.224406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:25.174 [2024-12-07 08:14:36.224416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.224430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.224458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.224468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.224476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.174 [2024-12-07 08:14:36.224489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.174 [2024-12-07 08:14:36.227204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.174 [2024-12-07 08:14:36.227307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.227349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.227364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:25.174 [2024-12-07 08:14:36.227373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.227386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.227398] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.227405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.227413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.174 [2024-12-07 08:14:36.227426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.174 [2024-12-07 08:14:36.234317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.174 [2024-12-07 08:14:36.234419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.234462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.234477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:25.174 [2024-12-07 08:14:36.234486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.234500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.234525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.234534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.234542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.174 [2024-12-07 08:14:36.234554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:25.174 [2024-12-07 08:14:36.237279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.174 [2024-12-07 08:14:36.237379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.237422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.237437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:25.174 [2024-12-07 08:14:36.237445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:25.174 [2024-12-07 08:14:36.237467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:25.174 [2024-12-07 08:14:36.237480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.174 [2024-12-07 08:14:36.237488] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.174 [2024-12-07 08:14:36.237496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.174 [2024-12-07 08:14:36.237508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.174 [2024-12-07 08:14:36.244381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.174 [2024-12-07 08:14:36.244473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.244519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.174 [2024-12-07 08:14:36.244540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:25.175 [2024-12-07 08:14:36.244565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:25.175 [2024-12-07 08:14:36.244595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:25.175 [2024-12-07 08:14:36.244624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.175 [2024-12-07 08:14:36.244634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.175 [2024-12-07 08:14:36.244643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.175 [2024-12-07 08:14:36.244657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
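[annotation] Further down (from 08:14:37 onwards) the test stops the mDNS discovery service, restarts it, and checks that a second bdev_nvme_start_mdns_discovery reusing the name "mdns" (and, later, the already-running _nvme-disc._tcp service under the name "cdc") is rejected with Code=-17 Msg=File exists, which is the JSON-RPC error printed below. A sketch of those host-side calls, with the same scripts/rpc.py assumption:

  scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'   # -> mdns

  # Reusing the name "mdns" (or the running _nvme-disc._tcp service) is refused:
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
  # => error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists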
00:23:25.175 [2024-12-07 08:14:36.247340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:25.175 [2024-12-07 08:14:36.247444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.175 [2024-12-07 08:14:36.247488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.175 [2024-12-07 08:14:36.247504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24fb760 with addr=10.0.0.3, port=4420 00:23:25.175 [2024-12-07 08:14:36.247513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24fb760 is same with the state(5) to be set 00:23:25.175 [2024-12-07 08:14:36.247528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24fb760 (9): Bad file descriptor 00:23:25.175 [2024-12-07 08:14:36.247540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:25.175 [2024-12-07 08:14:36.247548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:25.175 [2024-12-07 08:14:36.247556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:25.175 [2024-12-07 08:14:36.247569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.175 [2024-12-07 08:14:36.253617] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:25.175 [2024-12-07 08:14:36.253660] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:25.175 [2024-12-07 08:14:36.253686] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:25.175 [2024-12-07 08:14:36.254428] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:25.175 [2024-12-07 08:14:36.254524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.175 [2024-12-07 08:14:36.254569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:25.175 [2024-12-07 08:14:36.254584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2510aa0 with addr=10.0.0.2, port=4420 00:23:25.175 [2024-12-07 08:14:36.254594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2510aa0 is same with the state(5) to be set 00:23:25.175 [2024-12-07 08:14:36.254639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2510aa0 (9): Bad file descriptor 00:23:25.175 [2024-12-07 08:14:36.254688] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:25.175 [2024-12-07 08:14:36.254704] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:25.175 [2024-12-07 08:14:36.254720] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:25.175 [2024-12-07 08:14:36.254769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:25.175 [2024-12-07 08:14:36.254782] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:25.175 [2024-12-07 08:14:36.254791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:25.175 [2024-12-07 08:14:36.254831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:25.175 [2024-12-07 08:14:36.339757] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:25.175 [2024-12-07 08:14:36.340747] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:26.108 08:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.108 08:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@68 -- # sort 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@68 -- # xargs 00:23:26.108 08:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.108 08:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@64 -- # sort 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@64 -- # xargs 00:23:26.108 08:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 08:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:26.108 08:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.108 08:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@72 -- # xargs 00:23:26.108 08:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:26.108 08:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.108 08:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:26.108 08:14:37 -- 
host/mdns_discovery.sh@72 -- # xargs 00:23:26.108 08:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:26.108 08:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.108 08:14:37 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:26.108 08:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:26.108 08:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.367 08:14:37 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:26.367 08:14:37 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:26.367 08:14:37 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:26.367 08:14:37 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:26.367 08:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.367 08:14:37 -- common/autotest_common.sh@10 -- # set +x 00:23:26.367 08:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.367 08:14:37 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:26.367 [2024-12-07 08:14:37.508249] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:27.302 08:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.302 08:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@80 -- # sort 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@80 -- # xargs 00:23:27.302 08:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.302 08:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.302 08:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@68 -- # sort 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@68 -- # xargs 00:23:27.302 08:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:27.302 08:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.302 08:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@64 -- # sort 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@64 -- # xargs 00:23:27.302 08:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:27.302 08:14:38 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:27.561 08:14:38 -- 
host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:27.561 08:14:38 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:27.561 08:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.561 08:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:27.561 08:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.561 08:14:38 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:27.561 08:14:38 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:27.561 08:14:38 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:27.561 08:14:38 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:27.561 08:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.561 08:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:27.561 08:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.561 08:14:38 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:27.561 08:14:38 -- common/autotest_common.sh@650 -- # local es=0 00:23:27.561 08:14:38 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:27.561 08:14:38 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:27.561 08:14:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.561 08:14:38 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:27.561 08:14:38 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:27.561 08:14:38 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:27.561 08:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.561 08:14:38 -- common/autotest_common.sh@10 -- # set +x 00:23:27.561 [2024-12-07 08:14:38.642201] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:27.561 2024/12/07 08:14:38 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:27.561 request: 00:23:27.561 { 00:23:27.561 "method": "bdev_nvme_start_mdns_discovery", 00:23:27.561 "params": { 00:23:27.561 "name": "mdns", 00:23:27.561 "svcname": "_nvme-disc._http", 00:23:27.561 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:27.561 } 00:23:27.561 } 00:23:27.561 Got JSON-RPC error response 00:23:27.561 GoRPCClient: error on JSON-RPC call 00:23:27.561 08:14:38 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:27.561 08:14:38 -- common/autotest_common.sh@653 -- # es=1 00:23:27.561 08:14:38 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:27.561 08:14:38 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:27.561 08:14:38 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:27.561 08:14:38 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:27.821 [2024-12-07 08:14:39.030843] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:28.079 [2024-12-07 08:14:39.130838] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:28.079 [2024-12-07 08:14:39.230846] 
bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.079 [2024-12-07 08:14:39.230867] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:28.079 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.079 cookie is 0 00:23:28.079 is_local: 1 00:23:28.079 our_own: 0 00:23:28.079 wide_area: 0 00:23:28.079 multicast: 1 00:23:28.079 cached: 1 00:23:28.079 [2024-12-07 08:14:39.330843] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.079 [2024-12-07 08:14:39.330865] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:28.079 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.079 cookie is 0 00:23:28.079 is_local: 1 00:23:28.079 our_own: 0 00:23:28.079 wide_area: 0 00:23:28.079 multicast: 1 00:23:28.079 cached: 1 00:23:29.013 [2024-12-07 08:14:40.234397] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:29.013 [2024-12-07 08:14:40.234428] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:29.013 [2024-12-07 08:14:40.234461] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:29.271 [2024-12-07 08:14:40.320506] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:29.271 [2024-12-07 08:14:40.334221] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.271 [2024-12-07 08:14:40.334266] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.271 [2024-12-07 08:14:40.334281] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.271 [2024-12-07 08:14:40.381196] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:29.271 [2024-12-07 08:14:40.381250] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:29.271 [2024-12-07 08:14:40.420360] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:29.271 [2024-12-07 08:14:40.479073] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:29.271 [2024-12-07 08:14:40.479100] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:32.582 08:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@80 -- # sort 00:23:32.582 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@80 -- # xargs 00:23:32.582 08:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@185 -- # [[ mdns == 
\m\d\n\s ]] 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@76 -- # sort 00:23:32.582 08:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.582 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@76 -- # xargs 00:23:32.582 08:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.582 08:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.582 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@64 -- # sort 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@64 -- # xargs 00:23:32.582 08:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:32.582 08:14:43 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:32.582 08:14:43 -- common/autotest_common.sh@650 -- # local es=0 00:23:32.582 08:14:43 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:32.582 08:14:43 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:32.582 08:14:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.583 08:14:43 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:32.583 08:14:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:32.583 08:14:43 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:32.583 08:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.583 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:32.583 [2024-12-07 08:14:43.820271] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:32.583 2024/12/07 08:14:43 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:32.583 request: 00:23:32.583 { 00:23:32.583 "method": "bdev_nvme_start_mdns_discovery", 00:23:32.583 "params": { 00:23:32.583 "name": "cdc", 00:23:32.583 "svcname": "_nvme-disc._tcp", 00:23:32.583 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:32.583 } 00:23:32.583 } 00:23:32.583 Got JSON-RPC error response 00:23:32.583 GoRPCClient: error on JSON-RPC call 00:23:32.583 08:14:43 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:32.583 08:14:43 -- common/autotest_common.sh@653 -- # 
es=1 00:23:32.583 08:14:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:32.583 08:14:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:32.583 08:14:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:32.583 08:14:43 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:32.583 08:14:43 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:32.583 08:14:43 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:32.583 08:14:43 -- host/mdns_discovery.sh@76 -- # sort 00:23:32.583 08:14:43 -- host/mdns_discovery.sh@76 -- # xargs 00:23:32.583 08:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.583 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:32.869 08:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@64 -- # sort 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:32.869 08:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.869 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@64 -- # xargs 00:23:32.869 08:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:32.869 08:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.869 08:14:43 -- common/autotest_common.sh@10 -- # set +x 00:23:32.869 08:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@197 -- # kill 98413 00:23:32.869 08:14:43 -- host/mdns_discovery.sh@200 -- # wait 98413 00:23:32.869 [2024-12-07 08:14:44.042748] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:32.869 08:14:44 -- host/mdns_discovery.sh@201 -- # kill 98494 00:23:32.869 Got SIGTERM, quitting. 00:23:32.869 08:14:44 -- host/mdns_discovery.sh@202 -- # kill 98443 00:23:32.869 Got SIGTERM, quitting. 00:23:32.869 08:14:44 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:32.869 08:14:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:32.869 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:32.869 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:32.869 08:14:44 -- nvmf/common.sh@116 -- # sync 00:23:32.869 avahi-daemon 0.8 exiting. 
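The duplicate-start behaviour exercised above reduces to a handful of rpc.py calls; a minimal sketch, assuming a host application already listening on /tmp/host.sock and an avahi-advertised _nvme-disc._tcp service as in this run (same rpc.py path, name, and host NQN as the trace):
#!/usr/bin/env bash
# Sketch of the negative test above: a second bdev_nvme_start_mdns_discovery that
# reuses the discovery name must be rejected with -17 (File exists).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/tmp/host.sock
# First start succeeds and begins browsing _nvme-disc._tcp via avahi.
"$rpc" -s "$sock" bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# Reusing the name (even with a different service) is expected to fail.
if "$rpc" -s "$sock" bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test; then
    echo "unexpected success: duplicate mDNS discovery was accepted" >&2
    exit 1
fi
# Inspect the poller, then tear it down.
"$rpc" -s "$sock" bdev_nvme_get_mdns_discovery_info
"$rpc" -s "$sock" bdev_nvme_stop_mdns_discovery -b mdns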
00:23:33.129 08:14:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:33.129 08:14:44 -- nvmf/common.sh@119 -- # set +e 00:23:33.129 08:14:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:33.129 08:14:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:33.129 rmmod nvme_tcp 00:23:33.129 rmmod nvme_fabrics 00:23:33.129 rmmod nvme_keyring 00:23:33.129 08:14:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:33.129 08:14:44 -- nvmf/common.sh@123 -- # set -e 00:23:33.129 08:14:44 -- nvmf/common.sh@124 -- # return 0 00:23:33.129 08:14:44 -- nvmf/common.sh@477 -- # '[' -n 98371 ']' 00:23:33.129 08:14:44 -- nvmf/common.sh@478 -- # killprocess 98371 00:23:33.129 08:14:44 -- common/autotest_common.sh@936 -- # '[' -z 98371 ']' 00:23:33.129 08:14:44 -- common/autotest_common.sh@940 -- # kill -0 98371 00:23:33.129 08:14:44 -- common/autotest_common.sh@941 -- # uname 00:23:33.129 08:14:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:33.129 08:14:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98371 00:23:33.129 08:14:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:33.129 08:14:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:33.129 killing process with pid 98371 00:23:33.129 08:14:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98371' 00:23:33.129 08:14:44 -- common/autotest_common.sh@955 -- # kill 98371 00:23:33.129 08:14:44 -- common/autotest_common.sh@960 -- # wait 98371 00:23:33.387 08:14:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:33.387 08:14:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:33.387 08:14:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:33.387 08:14:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.387 08:14:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:33.387 08:14:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.387 08:14:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.387 08:14:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.387 08:14:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:33.387 00:23:33.387 real 0m19.857s 00:23:33.387 user 0m39.473s 00:23:33.387 sys 0m1.875s 00:23:33.387 08:14:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:33.387 ************************************ 00:23:33.388 END TEST nvmf_mdns_discovery 00:23:33.388 08:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:33.388 ************************************ 00:23:33.388 08:14:44 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:33.388 08:14:44 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:33.388 08:14:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:33.388 08:14:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.388 08:14:44 -- common/autotest_common.sh@10 -- # set +x 00:23:33.388 ************************************ 00:23:33.388 START TEST nvmf_multipath 00:23:33.388 ************************************ 00:23:33.388 08:14:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:33.388 * Looking for test storage... 
00:23:33.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:33.388 08:14:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:33.388 08:14:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:33.388 08:14:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:33.647 08:14:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:33.647 08:14:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:33.647 08:14:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:33.647 08:14:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:33.647 08:14:44 -- scripts/common.sh@335 -- # IFS=.-: 00:23:33.647 08:14:44 -- scripts/common.sh@335 -- # read -ra ver1 00:23:33.647 08:14:44 -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.647 08:14:44 -- scripts/common.sh@336 -- # read -ra ver2 00:23:33.647 08:14:44 -- scripts/common.sh@337 -- # local 'op=<' 00:23:33.647 08:14:44 -- scripts/common.sh@339 -- # ver1_l=2 00:23:33.647 08:14:44 -- scripts/common.sh@340 -- # ver2_l=1 00:23:33.647 08:14:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:33.647 08:14:44 -- scripts/common.sh@343 -- # case "$op" in 00:23:33.647 08:14:44 -- scripts/common.sh@344 -- # : 1 00:23:33.647 08:14:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:33.647 08:14:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.647 08:14:44 -- scripts/common.sh@364 -- # decimal 1 00:23:33.647 08:14:44 -- scripts/common.sh@352 -- # local d=1 00:23:33.647 08:14:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.647 08:14:44 -- scripts/common.sh@354 -- # echo 1 00:23:33.647 08:14:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:33.647 08:14:44 -- scripts/common.sh@365 -- # decimal 2 00:23:33.647 08:14:44 -- scripts/common.sh@352 -- # local d=2 00:23:33.647 08:14:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.647 08:14:44 -- scripts/common.sh@354 -- # echo 2 00:23:33.647 08:14:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:33.647 08:14:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:33.647 08:14:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:33.647 08:14:44 -- scripts/common.sh@367 -- # return 0 00:23:33.647 08:14:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.647 08:14:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:33.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.647 --rc genhtml_branch_coverage=1 00:23:33.647 --rc genhtml_function_coverage=1 00:23:33.647 --rc genhtml_legend=1 00:23:33.647 --rc geninfo_all_blocks=1 00:23:33.647 --rc geninfo_unexecuted_blocks=1 00:23:33.647 00:23:33.647 ' 00:23:33.647 08:14:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:33.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.647 --rc genhtml_branch_coverage=1 00:23:33.647 --rc genhtml_function_coverage=1 00:23:33.647 --rc genhtml_legend=1 00:23:33.647 --rc geninfo_all_blocks=1 00:23:33.647 --rc geninfo_unexecuted_blocks=1 00:23:33.647 00:23:33.647 ' 00:23:33.647 08:14:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:33.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.647 --rc genhtml_branch_coverage=1 00:23:33.647 --rc genhtml_function_coverage=1 00:23:33.647 --rc genhtml_legend=1 00:23:33.647 --rc geninfo_all_blocks=1 00:23:33.647 --rc geninfo_unexecuted_blocks=1 00:23:33.647 00:23:33.647 ' 00:23:33.647 
08:14:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:33.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.647 --rc genhtml_branch_coverage=1 00:23:33.647 --rc genhtml_function_coverage=1 00:23:33.647 --rc genhtml_legend=1 00:23:33.647 --rc geninfo_all_blocks=1 00:23:33.647 --rc geninfo_unexecuted_blocks=1 00:23:33.647 00:23:33.647 ' 00:23:33.647 08:14:44 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.647 08:14:44 -- nvmf/common.sh@7 -- # uname -s 00:23:33.647 08:14:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.647 08:14:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.647 08:14:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.647 08:14:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.647 08:14:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.647 08:14:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.647 08:14:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.647 08:14:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.647 08:14:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.647 08:14:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.647 08:14:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:23:33.647 08:14:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:23:33.647 08:14:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.647 08:14:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.647 08:14:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.647 08:14:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.647 08:14:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.647 08:14:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.647 08:14:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.647 08:14:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.647 08:14:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.647 08:14:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.647 08:14:44 -- paths/export.sh@5 -- # export PATH 00:23:33.647 08:14:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.647 08:14:44 -- nvmf/common.sh@46 -- # : 0 00:23:33.647 08:14:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:33.647 08:14:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:33.647 08:14:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:33.647 08:14:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.647 08:14:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.647 08:14:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:33.647 08:14:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:33.647 08:14:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:33.647 08:14:44 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.647 08:14:44 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.647 08:14:44 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:33.647 08:14:44 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:33.647 08:14:44 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:33.647 08:14:44 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:33.647 08:14:44 -- host/multipath.sh@30 -- # nvmftestinit 00:23:33.647 08:14:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:33.647 08:14:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.647 08:14:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:33.647 08:14:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:33.647 08:14:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:33.647 08:14:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.647 08:14:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.647 08:14:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.647 08:14:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:33.647 08:14:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:33.647 08:14:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:33.647 08:14:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:33.647 08:14:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:33.647 08:14:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:33.647 08:14:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.647 08:14:44 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.647 08:14:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:33.647 08:14:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:33.647 08:14:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.647 08:14:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.647 08:14:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.647 08:14:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.647 08:14:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.648 08:14:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.648 08:14:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.648 08:14:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.648 08:14:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:33.648 08:14:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:33.648 Cannot find device "nvmf_tgt_br" 00:23:33.648 08:14:44 -- nvmf/common.sh@154 -- # true 00:23:33.648 08:14:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.648 Cannot find device "nvmf_tgt_br2" 00:23:33.648 08:14:44 -- nvmf/common.sh@155 -- # true 00:23:33.648 08:14:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:33.648 08:14:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:33.648 Cannot find device "nvmf_tgt_br" 00:23:33.648 08:14:44 -- nvmf/common.sh@157 -- # true 00:23:33.648 08:14:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:33.648 Cannot find device "nvmf_tgt_br2" 00:23:33.648 08:14:44 -- nvmf/common.sh@158 -- # true 00:23:33.648 08:14:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:33.648 08:14:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:33.648 08:14:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.648 08:14:44 -- nvmf/common.sh@161 -- # true 00:23:33.648 08:14:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.648 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.648 08:14:44 -- nvmf/common.sh@162 -- # true 00:23:33.648 08:14:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:33.648 08:14:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:33.648 08:14:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:33.648 08:14:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:33.907 08:14:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:33.907 08:14:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:33.907 08:14:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:33.907 08:14:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:33.907 08:14:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:33.907 08:14:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:33.907 08:14:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:33.907 08:14:44 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:33.907 08:14:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:33.907 08:14:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:33.907 08:14:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:33.907 08:14:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:33.907 08:14:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:33.907 08:14:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:33.907 08:14:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:33.907 08:14:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:33.907 08:14:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:33.907 08:14:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:33.907 08:14:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:33.907 08:14:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:33.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:33.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:23:33.907 00:23:33.907 --- 10.0.0.2 ping statistics --- 00:23:33.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.907 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:33.907 08:14:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:33.907 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:33.907 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:23:33.907 00:23:33.907 --- 10.0.0.3 ping statistics --- 00:23:33.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.907 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:23:33.907 08:14:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:33.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:33.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:33.907 00:23:33.907 --- 10.0.0.1 ping statistics --- 00:23:33.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:33.907 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:33.907 08:14:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:33.907 08:14:45 -- nvmf/common.sh@421 -- # return 0 00:23:33.907 08:14:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:33.907 08:14:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:33.907 08:14:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:33.907 08:14:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:33.907 08:14:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:33.907 08:14:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:33.907 08:14:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:33.907 08:14:45 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:33.907 08:14:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:33.907 08:14:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:33.907 08:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:33.907 08:14:45 -- nvmf/common.sh@469 -- # nvmfpid=99014 00:23:33.907 08:14:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:33.907 08:14:45 -- nvmf/common.sh@470 -- # waitforlisten 99014 00:23:33.907 08:14:45 -- common/autotest_common.sh@829 -- # '[' -z 99014 ']' 00:23:33.907 08:14:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.907 08:14:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.907 08:14:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.907 08:14:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.907 08:14:45 -- common/autotest_common.sh@10 -- # set +x 00:23:33.907 [2024-12-07 08:14:45.147744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:33.907 [2024-12-07 08:14:45.147850] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.165 [2024-12-07 08:14:45.283552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:34.165 [2024-12-07 08:14:45.358056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:34.165 [2024-12-07 08:14:45.358255] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.165 [2024-12-07 08:14:45.358270] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.165 [2024-12-07 08:14:45.358279] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
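The veth/netns topology that nvmf_veth_init just brought up can be summarised as a short script; a condensed sketch, assuming root privileges and the same interface names and 10.0.0.0/24 addresses used in this run:
#!/usr/bin/env bash
# The initiator stays in the default namespace (10.0.0.1 on nvmf_init_if); the target
# runs inside nvmf_tgt_ns_spdk with two interfaces (10.0.0.2 and 10.0.0.3); all veth
# peer ends are joined by the nvmf_br bridge.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # reachability checks, as above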
00:23:34.165 [2024-12-07 08:14:45.358739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.165 [2024-12-07 08:14:45.358789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.100 08:14:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.100 08:14:46 -- common/autotest_common.sh@862 -- # return 0 00:23:35.100 08:14:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:35.100 08:14:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.100 08:14:46 -- common/autotest_common.sh@10 -- # set +x 00:23:35.100 08:14:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.100 08:14:46 -- host/multipath.sh@33 -- # nvmfapp_pid=99014 00:23:35.100 08:14:46 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:35.359 [2024-12-07 08:14:46.436054] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.359 08:14:46 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:35.617 Malloc0 00:23:35.617 08:14:46 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:35.876 08:14:46 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:36.134 08:14:47 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.393 [2024-12-07 08:14:47.492962] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.393 08:14:47 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:36.652 [2024-12-07 08:14:47.765097] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:36.652 08:14:47 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:36.652 08:14:47 -- host/multipath.sh@44 -- # bdevperf_pid=99118 00:23:36.652 08:14:47 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:36.652 08:14:47 -- host/multipath.sh@47 -- # waitforlisten 99118 /var/tmp/bdevperf.sock 00:23:36.652 08:14:47 -- common/autotest_common.sh@829 -- # '[' -z 99118 ']' 00:23:36.652 08:14:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.652 08:14:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.652 08:14:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
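The target-side configuration traced above amounts to a few rpc.py calls; a condensed recap, assuming the namespaced nvmf_tgt from this run and its default RPC socket, with the same NQN, serial, and ports:
#!/usr/bin/env bash
# TCP transport, a 64 MiB Malloc-backed namespace, and one subsystem with ANA
# reporting enabled (-r) exposed on two ports, so the host can multipath between
# 10.0.0.2:4420 and 10.0.0.2:4421.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -r -m 2
"$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
In the trace that follows, bdevperf attaches both listeners as paths of one controller (bdev_nvme_attach_controller ... -x multipath), and the test then flips each listener's ANA state with nvmf_subsystem_listener_set_ana_state while a bpftrace script (nvmf_path.bt) counts per-port I/O to confirm traffic follows the optimized path.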
00:23:36.652 08:14:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.652 08:14:47 -- common/autotest_common.sh@10 -- # set +x 00:23:37.586 08:14:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:37.586 08:14:48 -- common/autotest_common.sh@862 -- # return 0 00:23:37.586 08:14:48 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:37.844 08:14:49 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:38.413 Nvme0n1 00:23:38.413 08:14:49 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:38.671 Nvme0n1 00:23:38.671 08:14:49 -- host/multipath.sh@78 -- # sleep 1 00:23:38.671 08:14:49 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.608 08:14:50 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:39.608 08:14:50 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:39.866 08:14:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:40.126 08:14:51 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:40.126 08:14:51 -- host/multipath.sh@65 -- # dtrace_pid=99205 00:23:40.126 08:14:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:40.126 08:14:51 -- host/multipath.sh@66 -- # sleep 6 00:23:46.689 08:14:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:46.689 08:14:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:46.689 08:14:57 -- host/multipath.sh@67 -- # active_port=4421 00:23:46.689 08:14:57 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.689 Attaching 4 probes... 
00:23:46.689 @path[10.0.0.2, 4421]: 21186 00:23:46.689 @path[10.0.0.2, 4421]: 21681 00:23:46.689 @path[10.0.0.2, 4421]: 21561 00:23:46.689 @path[10.0.0.2, 4421]: 21656 00:23:46.689 @path[10.0.0.2, 4421]: 22060 00:23:46.689 08:14:57 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:46.689 08:14:57 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:46.689 08:14:57 -- host/multipath.sh@69 -- # sed -n 1p 00:23:46.689 08:14:57 -- host/multipath.sh@69 -- # port=4421 00:23:46.689 08:14:57 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.689 08:14:57 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:46.689 08:14:57 -- host/multipath.sh@72 -- # kill 99205 00:23:46.689 08:14:57 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:46.689 08:14:57 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:46.689 08:14:57 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:46.689 08:14:57 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:46.948 08:14:58 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:46.948 08:14:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:46.948 08:14:58 -- host/multipath.sh@65 -- # dtrace_pid=99342 00:23:46.948 08:14:58 -- host/multipath.sh@66 -- # sleep 6 00:23:53.517 08:15:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:53.517 08:15:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:53.517 08:15:04 -- host/multipath.sh@67 -- # active_port=4420 00:23:53.517 08:15:04 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.517 Attaching 4 probes... 
00:23:53.517 @path[10.0.0.2, 4420]: 21465 00:23:53.517 @path[10.0.0.2, 4420]: 21284 00:23:53.517 @path[10.0.0.2, 4420]: 20481 00:23:53.517 @path[10.0.0.2, 4420]: 20095 00:23:53.517 @path[10.0.0.2, 4420]: 20756 00:23:53.517 08:15:04 -- host/multipath.sh@69 -- # sed -n 1p 00:23:53.517 08:15:04 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:53.517 08:15:04 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:53.517 08:15:04 -- host/multipath.sh@69 -- # port=4420 00:23:53.517 08:15:04 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:53.517 08:15:04 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:53.517 08:15:04 -- host/multipath.sh@72 -- # kill 99342 00:23:53.517 08:15:04 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:53.517 08:15:04 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:53.517 08:15:04 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:53.517 08:15:04 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:53.776 08:15:04 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:53.776 08:15:04 -- host/multipath.sh@65 -- # dtrace_pid=99468 00:23:53.776 08:15:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:53.776 08:15:04 -- host/multipath.sh@66 -- # sleep 6 00:24:00.331 08:15:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:00.331 08:15:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:00.331 08:15:11 -- host/multipath.sh@67 -- # active_port=4421 00:24:00.331 08:15:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.331 Attaching 4 probes... 
00:24:00.331 @path[10.0.0.2, 4421]: 15748 00:24:00.331 @path[10.0.0.2, 4421]: 21033 00:24:00.331 @path[10.0.0.2, 4421]: 21099 00:24:00.331 @path[10.0.0.2, 4421]: 21127 00:24:00.331 @path[10.0.0.2, 4421]: 20999 00:24:00.331 08:15:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:00.331 08:15:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:00.331 08:15:11 -- host/multipath.sh@69 -- # sed -n 1p 00:24:00.331 08:15:11 -- host/multipath.sh@69 -- # port=4421 00:24:00.331 08:15:11 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.331 08:15:11 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:00.331 08:15:11 -- host/multipath.sh@72 -- # kill 99468 00:24:00.331 08:15:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:00.331 08:15:11 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:00.331 08:15:11 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:00.331 08:15:11 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:00.589 08:15:11 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:00.589 08:15:11 -- host/multipath.sh@65 -- # dtrace_pid=99604 00:24:00.589 08:15:11 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:00.589 08:15:11 -- host/multipath.sh@66 -- # sleep 6 00:24:07.152 08:15:17 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:07.152 08:15:17 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:07.152 08:15:18 -- host/multipath.sh@67 -- # active_port= 00:24:07.152 08:15:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.152 Attaching 4 probes... 
00:24:07.152 00:24:07.152 00:24:07.152 00:24:07.152 00:24:07.152 00:24:07.152 08:15:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:07.152 08:15:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:07.152 08:15:18 -- host/multipath.sh@69 -- # sed -n 1p 00:24:07.152 08:15:18 -- host/multipath.sh@69 -- # port= 00:24:07.152 08:15:18 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:07.152 08:15:18 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:07.152 08:15:18 -- host/multipath.sh@72 -- # kill 99604 00:24:07.152 08:15:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:07.152 08:15:18 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:07.152 08:15:18 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:07.152 08:15:18 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:07.410 08:15:18 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:07.410 08:15:18 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:07.410 08:15:18 -- host/multipath.sh@65 -- # dtrace_pid=99736 00:24:07.410 08:15:18 -- host/multipath.sh@66 -- # sleep 6 00:24:13.977 08:15:24 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:13.977 08:15:24 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:13.977 08:15:24 -- host/multipath.sh@67 -- # active_port=4421 00:24:13.977 08:15:24 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.977 Attaching 4 probes... 
00:24:13.977 @path[10.0.0.2, 4421]: 20603 00:24:13.977 @path[10.0.0.2, 4421]: 20888 00:24:13.977 @path[10.0.0.2, 4421]: 20796 00:24:13.978 @path[10.0.0.2, 4421]: 20713 00:24:13.978 @path[10.0.0.2, 4421]: 20766 00:24:13.978 08:15:24 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:13.978 08:15:24 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:13.978 08:15:24 -- host/multipath.sh@69 -- # sed -n 1p 00:24:13.978 08:15:24 -- host/multipath.sh@69 -- # port=4421 00:24:13.978 08:15:24 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:13.978 08:15:24 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:13.978 08:15:24 -- host/multipath.sh@72 -- # kill 99736 00:24:13.978 08:15:24 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:13.978 08:15:24 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:13.978 [2024-12-07 08:15:25.016471] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016528] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016540] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016548] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016650] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016694] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016732] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016818] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016889] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016904] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016911] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016919] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016949] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016972] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016988] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.016996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017004] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017020] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017052] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the 
state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017090] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017105] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017113] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017121] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017129] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017159] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017167] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017175] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 [2024-12-07 08:15:25.017183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2074370 is same with the state(5) to be set 00:24:13.978 08:15:25 -- host/multipath.sh@101 -- # sleep 1 00:24:14.916 08:15:26 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:14.916 08:15:26 -- host/multipath.sh@65 -- # dtrace_pid=99870 00:24:14.916 08:15:26 -- host/multipath.sh@66 -- # sleep 6 00:24:14.916 08:15:26 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:21.583 08:15:32 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:21.583 08:15:32 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:21.583 08:15:32 -- host/multipath.sh@67 -- # active_port=4420 00:24:21.583 08:15:32 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:21.583 Attaching 4 probes... 
00:24:21.583 @path[10.0.0.2, 4420]: 19931 00:24:21.583 @path[10.0.0.2, 4420]: 21246 00:24:21.583 @path[10.0.0.2, 4420]: 21318 00:24:21.583 @path[10.0.0.2, 4420]: 21117 00:24:21.584 @path[10.0.0.2, 4420]: 21454 00:24:21.584 08:15:32 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:21.584 08:15:32 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:21.584 08:15:32 -- host/multipath.sh@69 -- # sed -n 1p 00:24:21.584 08:15:32 -- host/multipath.sh@69 -- # port=4420 00:24:21.584 08:15:32 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:21.584 08:15:32 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:21.584 08:15:32 -- host/multipath.sh@72 -- # kill 99870 00:24:21.584 08:15:32 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:21.584 08:15:32 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:21.584 [2024-12-07 08:15:32.605097] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:21.584 08:15:32 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:21.842 08:15:32 -- host/multipath.sh@111 -- # sleep 6 00:24:28.416 08:15:38 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:28.416 08:15:38 -- host/multipath.sh@65 -- # dtrace_pid=100064 00:24:28.416 08:15:38 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99014 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:28.416 08:15:38 -- host/multipath.sh@66 -- # sleep 6 00:24:33.679 08:15:44 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:33.679 08:15:44 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:33.937 08:15:45 -- host/multipath.sh@67 -- # active_port=4421 00:24:33.937 08:15:45 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:33.937 Attaching 4 probes... 
00:24:33.937 @path[10.0.0.2, 4421]: 20850 00:24:33.937 @path[10.0.0.2, 4421]: 21098 00:24:33.937 @path[10.0.0.2, 4421]: 21151 00:24:33.937 @path[10.0.0.2, 4421]: 21130 00:24:33.937 @path[10.0.0.2, 4421]: 20969 00:24:33.937 08:15:45 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:33.937 08:15:45 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:33.937 08:15:45 -- host/multipath.sh@69 -- # sed -n 1p 00:24:33.937 08:15:45 -- host/multipath.sh@69 -- # port=4421 00:24:33.937 08:15:45 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:33.937 08:15:45 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:33.937 08:15:45 -- host/multipath.sh@72 -- # kill 100064 00:24:33.937 08:15:45 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:33.937 08:15:45 -- host/multipath.sh@114 -- # killprocess 99118 00:24:33.937 08:15:45 -- common/autotest_common.sh@936 -- # '[' -z 99118 ']' 00:24:33.937 08:15:45 -- common/autotest_common.sh@940 -- # kill -0 99118 00:24:33.937 08:15:45 -- common/autotest_common.sh@941 -- # uname 00:24:33.937 08:15:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:33.937 08:15:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99118 00:24:34.214 killing process with pid 99118 00:24:34.214 08:15:45 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:34.214 08:15:45 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:34.214 08:15:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99118' 00:24:34.214 08:15:45 -- common/autotest_common.sh@955 -- # kill 99118 00:24:34.214 08:15:45 -- common/autotest_common.sh@960 -- # wait 99118 00:24:34.214 Connection closed with partial response: 00:24:34.214 00:24:34.214 00:24:34.214 08:15:45 -- host/multipath.sh@116 -- # wait 99118 00:24:34.214 08:15:45 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:34.214 [2024-12-07 08:14:47.824712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:34.214 [2024-12-07 08:14:47.824823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99118 ] 00:24:34.214 [2024-12-07 08:14:47.959881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.214 [2024-12-07 08:14:48.039645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.214 Running I/O for 90 seconds... 
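The traced commands above outline how the test validates a path change: rpc.py nvmf_subsystem_add_listener opens 10.0.0.2:4421, rpc.py nvmf_subsystem_listener_set_ana_state marks that listener optimized, and confirm_io_on_port then cross-checks the listener reported over JSON-RPC against the port recorded by the bpftrace probe in trace.txt. The following is a minimal sketch of that check, reconstructed from the traced pipeline; the helper name confirm_io_on_port matches the trace, but the argument handling, variable names, and exact pipeline order are assumptions rather than the verbatim multipath.sh source.

  # Sketch reconstructed from the traced commands; paths are the ones shown in the log.
  confirm_io_on_port() {
      local expected_state=$1 expected_port=$2

      # Ask the target which listener currently carries the expected ANA state.
      active_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
          nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
          jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

      # Take the port seen by the bpftrace probe: first "@path[10.0.0.2, PORT]: COUNT" line,
      # second field, with the trailing "]:" stripped.
      port=$(awk '$1=="@path[10.0.0.2," {print $2}' \
          /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt | sed -n 1p | cut -d ']' -f1)

      # Both views must agree on the expected port for this step of the test to pass.
      [[ $active_port == "$expected_port" ]] && [[ $port == "$expected_port" ]]
  }

Under that reading, the "@path[10.0.0.2, 4421]" counters above confirm that I/O moved to port 4421 once the ANA state on that listener was set to optimized, after which the test tears down the bpftrace probe and the bdevperf process (pid 99118).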
00:24:34.214 [2024-12-07 08:14:58.151960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.214 [2024-12-07 08:14:58.152433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.214 [2024-12-07 08:14:58.152907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.214 [2024-12-07 08:14:58.152922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.152942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.152955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.152984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.152999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.153101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.215 [2024-12-07 08:14:58.153257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.153383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.153420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.153458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.153541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.153563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.153579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.215 [2024-12-07 08:14:58.154815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.154939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:24:34.215 [2024-12-07 08:14:58.154976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.215 [2024-12-07 08:14:58.154997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.215 [2024-12-07 08:14:58.155019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.155034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.155624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.155660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.155695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.155849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.155922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.155958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.155985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.156001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.156036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.156070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.216 [2024-12-07 08:14:58.156139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.156496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.216 [2024-12-07 08:14:58.156535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.216 [2024-12-07 08:14:58.156556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.216 [2024-12-07 08:14:58.156572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.156623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.156659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.156694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.156730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.156765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.156801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.156836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.156872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.156908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.156950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.156973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.156988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.157008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.157023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.157044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.157059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.157801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.157830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.157858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.157874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.157896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.157912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.157934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.157950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.157972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.157997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:24:34.217 [2024-12-07 08:14:58.158056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.217 [2024-12-07 08:14:58.158775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.217 [2024-12-07 08:14:58.158795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.217 [2024-12-07 08:14:58.158810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.158831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.158846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.158867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.158882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.158904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.158919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.158940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.158954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.158975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.158991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.218 [2024-12-07 08:14:58.159251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 
nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.159847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.159901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.159916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.160465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.160492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.160534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.218 [2024-12-07 08:14:58.160551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.160587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.160601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.160622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.160636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.160657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.218 [2024-12-07 08:14:58.160681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.218 [2024-12-07 08:14:58.160703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.160718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.160739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.160753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.160773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.160787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.160807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.160821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.160841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.160855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.160875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.160889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.160909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.160923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 
00:24:34.219 [2024-12-07 08:14:58.160943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.160957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.160977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.160991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.161187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.161246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.161301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.161337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.161374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.161410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.161952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.161973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.161988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.162021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.162035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.162056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.162070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.162091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.162105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.162141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.219 [2024-12-07 08:14:58.162162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.162184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.219 [2024-12-07 08:14:58.162199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.219 [2024-12-07 08:14:58.162246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.219 [2024-12-07 08:14:58.162272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.162297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.162313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.174473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.174569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.174619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.174655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.174987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.175001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.175071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.175177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.175228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.175413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.175485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:24:34.220 [2024-12-07 08:14:58.175506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.175557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.175593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.175608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.176406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.176451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.176488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.176525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.220 [2024-12-07 08:14:58.176577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.176614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.176651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.176687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.220 [2024-12-07 08:14:58.176709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.220 [2024-12-07 08:14:58.176724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.176745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.176759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.176781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.176796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.176817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.176831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.176853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.176867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.176888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.176903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.176924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.176939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.176975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.176989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.177068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.177103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.177138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.177172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.177232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.177286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.177396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.221 [2024-12-07 08:14:58.177432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 
nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.177976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.177995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.178045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.178096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.178167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.178222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.178291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.178341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.221 [2024-12-07 08:14:58.178389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.221 [2024-12-07 08:14:58.178439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.221 [2024-12-07 08:14:58.178468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.178488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.178516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.178543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.178573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.178593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.178622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.178642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.178671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.178690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.178720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.178740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.179588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.179697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:24:34.222 [2024-12-07 08:14:58.179726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.179746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.179795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.179904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.179953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.179982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.180002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.180600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.180671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.180720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.180769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.180819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.222 [2024-12-07 08:14:58.180877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.180958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.180978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.181007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.181028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.181056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.181076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.181106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.181126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.181155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.181176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.181243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.222 [2024-12-07 08:14:58.181267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.181297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.222 [2024-12-07 08:14:58.181317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.222 [2024-12-07 08:14:58.181346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.181630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.181679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.181763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.181910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.181959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.181988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.182068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.182179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.182274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.182323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.182833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.182893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:24:34.223 [2024-12-07 08:14:58.182922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.182943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.182972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.182991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.183020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.183040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.183069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.223 [2024-12-07 08:14:58.183089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.183118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.223 [2024-12-07 08:14:58.183138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.223 [2024-12-07 08:14:58.183166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.183186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.183242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.183263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.183293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.183313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.183342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.183363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.183401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.183422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.183451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.183471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.183500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.183529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.184563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.184637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.184687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.184736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.184786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.184834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.184884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.184943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.184972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.184992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.185041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.185151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.185338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.185436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.224 [2024-12-07 08:14:58.185485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.185892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.185941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.185970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.185990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.186019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 
lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.224 [2024-12-07 08:14:58.186040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.186068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.186088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.186122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.186148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.186179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.186222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.186255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.224 [2024-12-07 08:14:58.186276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.224 [2024-12-07 08:14:58.186305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.186856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.186905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.186954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.186983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.187004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.187033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.187053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:24:34.225 [2024-12-07 08:14:58.187082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.187101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.187130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.187158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.187189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.187233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.187264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.187284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.187313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.187333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.187363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.187382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.188167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.188266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.188366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.188593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.225 [2024-12-07 08:14:58.188701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.188975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.188995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.189024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.189044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.189074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.225 [2024-12-07 08:14:58.189094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.225 [2024-12-07 08:14:58.189123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.189287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.189336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.189386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.226 [2024-12-07 08:14:58.189434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.189483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.189532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.189957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.189976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.226 [2024-12-07 08:14:58.190762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:24:34.226 [2024-12-07 08:14:58.190815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.226 [2024-12-07 08:14:58.190915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.226 [2024-12-07 08:14:58.190935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.190956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.190970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.190990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.191552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.191589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.191617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.227 [2024-12-07 08:14:58.192666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.227 [2024-12-07 08:14:58.192942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.192962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.227 [2024-12-07 08:14:58.192976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.227 [2024-12-07 08:14:58.193001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 
nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.193433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:24:34.228 [2024-12-07 08:14:58.193844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.193975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.193996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.194023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.194065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.194102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.194151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.194185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.194232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.194282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.194318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.194354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.194391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.194427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.194969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.194993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.195009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.195029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.228 [2024-12-07 08:14:58.195043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.195074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.228 [2024-12-07 08:14:58.195089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.228 [2024-12-07 08:14:58.195110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:34.229 [2024-12-07 08:14:58.195526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.195771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 
nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.195972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.195992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.229 [2024-12-07 08:14:58.196496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.196535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.196571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.229 [2024-12-07 08:14:58.196622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.229 [2024-12-07 08:14:58.196642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.196671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
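The trailing fields on each completion line map directly onto the 16-byte completion queue entry: cdw0 is the command-specific result dword, sqhd the submission queue head pointer, p the phase tag, and m / dnr the "more" and "do not retry" bits of the status field. A hedged sketch of unpacking them from raw CQE dwords 2 and 3 (bit positions per the NVMe base specification; the struct and function names here are illustrative only):

    /* Sketch: unpack the fields the log prints as
     * "cdw0:... sqhd:... p:... m:... dnr:..." from raw CQE dwords. */
    #include <stdint.h>
    #include <stdio.h>

    struct cqe_fields {
        uint32_t cdw0; /* command-specific result            */
        uint16_t sqhd; /* SQ head pointer, DW2 bits 15:0     */
        uint16_t cid;  /* command identifier, DW3 bits 15:0  */
        uint8_t  p;    /* phase tag, DW3 bit 16              */
        uint8_t  sc;   /* status code, DW3 bits 24:17        */
        uint8_t  sct;  /* status code type, DW3 bits 27:25   */
        uint8_t  m;    /* more bit, DW3 bit 30               */
        uint8_t  dnr;  /* do-not-retry bit, DW3 bit 31       */
    };

    static struct cqe_fields unpack_cqe(uint32_t dw0, uint32_t dw2, uint32_t dw3)
    {
        struct cqe_fields f = {
            .cdw0 = dw0,
            .sqhd = (uint16_t)(dw2 & 0xffff),
            .cid  = (uint16_t)(dw3 & 0xffff),
            .p    = (uint8_t)((dw3 >> 16) & 0x1),
            .sc   = (uint8_t)((dw3 >> 17) & 0xff),
            .sct  = (uint8_t)((dw3 >> 25) & 0x7),
            .m    = (uint8_t)((dw3 >> 30) & 0x1),
            .dnr  = (uint8_t)((dw3 >> 31) & 0x1),
        };
        return f;
    }

    int main(void)
    {
        /* Example values echoing the completion just above: cid 108, sqhd 0x0049 */
        struct cqe_fields f = unpack_cqe(0, 0x0049, (0x3u << 25) | (0x02u << 17) | 0x6c);
        printf("cid:%u sct:%x sc:%02x sqhd:%04x p:%u m:%u dnr:%u\n",
               f.cid, f.sct, f.sc, f.sqhd, f.p, f.m, f.dnr);
        return 0;
    }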
00:24:34.230 [2024-12-07 08:14:58.196692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.196706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.196739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.196780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.196814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.196848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.196881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.196915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.196949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.196975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.196990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.197010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.197024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.197044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.197057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.197077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.197091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.197112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.197126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.197146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.197160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.205994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.206215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.206360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.206396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.206504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.206540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.206604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.206638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.206672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.206692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.230 [2024-12-07 08:14:58.206707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.207480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.207526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.207575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.207643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.230 [2024-12-07 08:14:58.207677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.207711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.207746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.230 [2024-12-07 08:14:58.207780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.230 [2024-12-07 08:14:58.207800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.207814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.207834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.207848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.207868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.207882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.207903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.207917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.207937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.207952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.207972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.207986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:24:34.231 [2024-12-07 08:14:58.208656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.208671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.208962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.208976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.209018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.209053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.209087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.209121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.209155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.209190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.231 [2024-12-07 08:14:58.209269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.231 [2024-12-07 08:14:58.209309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.231 [2024-12-07 08:14:58.209330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.209345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.209367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.209381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.209402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.209417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.209438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.209453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.209482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.209498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.209519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.209534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.210263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.210302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.232 [2024-12-07 08:14:58.210374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.210519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.210617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.210956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.210976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.210996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.211018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.211032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.211052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.211066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.211086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.211100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.211120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.211134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.211154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.232 [2024-12-07 08:14:58.211168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.211188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.232 [2024-12-07 08:14:58.211201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.232 [2024-12-07 08:14:58.211251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:24:34.233 [2024-12-07 08:14:58.211476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.211731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.211765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.211798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.211941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.211975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.211995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.212009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.212077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.212111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.212145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.212560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.233 [2024-12-07 08:14:58.212611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.212659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.233 [2024-12-07 08:14:58.212693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.233 [2024-12-07 08:14:58.212714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.233 [2024-12-07 08:14:58.212728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.212747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.212761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.212781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.212795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.212815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.212829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.212848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.212868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.212890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.212904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.212925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.212939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 
lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.213732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.213776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.213814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.213851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.213887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.213923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.213960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.213981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.213996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:24:34.234 [2024-12-07 08:14:58.214530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.234 [2024-12-07 08:14:58.214954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.234 [2024-12-07 08:14:58.214975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.234 [2024-12-07 08:14:58.214989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.215575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.215634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.215704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.235 [2024-12-07 08:14:58.215738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.215773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.215813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.215834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.215849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.216517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.216554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:26896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.216666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.216802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.235 [2024-12-07 08:14:58.216835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.235 [2024-12-07 08:14:58.216855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.235 [2024-12-07 08:14:58.216869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.216889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.216903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.216923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.216937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.216957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.216971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.216991] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.236 [2024-12-07 08:14:58.217302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.236 [2024-12-07 08:14:58.217339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.236 [2024-12-07 08:14:58.217374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:24:34.236 [2024-12-07 08:14:58.217396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.236 [2024-12-07 08:14:58.217410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.236 [2024-12-07 08:14:58.217446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.236 [2024-12-07 08:14:58.217482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:26480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.236 [2024-12-07 08:14:58.217925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.236 [2024-12-07 08:14:58.217946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.217961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.217983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.217998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:27096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.237 [2024-12-07 08:14:58.218598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.218929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:27176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.218979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.218999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.219014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.219059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.219075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.219096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.219111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.219133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.219148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.219169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.219184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.219205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.219220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.219256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.219278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.219300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.237 [2024-12-07 08:14:58.219315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.237 [2024-12-07 08:14:58.220210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.237 [2024-12-07 08:14:58.220255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:24:34.238 [2024-12-07 08:14:58.220698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:27344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.220952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.220975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.220991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.221064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.221099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.221370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.221413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.238 [2024-12-07 08:14:58.221488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.238 [2024-12-07 08:14:58.221524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.238 [2024-12-07 08:14:58.221546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.239 [2024-12-07 08:14:58.221835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.221975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.221991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.222042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:14:58.222077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:26792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:14:58.222144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.222180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:14:58.222216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:14:58.222293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:14:58.222334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.222356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:14:58.222377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:14:58.223448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:14:58.223478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:15:04.729299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:15:04.729460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:15:04.729497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:15:04.729676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.239 [2024-12-07 08:15:04.729780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.239 [2024-12-07 08:15:04.729872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.239 [2024-12-07 08:15:04.729896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.729912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.729934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.729949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.729970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.729986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:24:34.240 [2024-12-07 08:15:04.730022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.730195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:94768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.730360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.730405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.730466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.730482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.240 [2024-12-07 08:15:04.731867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.240 [2024-12-07 08:15:04.731938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.240 [2024-12-07 08:15:04.731959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:34.240 [2024-12-07 08:15:04.731973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.731995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.732295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:94952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.732378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:94968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.732494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.732948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.732970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.732984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.733020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.733055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.733090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.733132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.733420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:24:34.241 [2024-12-07 08:15:04.733452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.733468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.733512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.241 [2024-12-07 08:15:04.733554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:34.241 [2024-12-07 08:15:04.733611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:95088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.241 [2024-12-07 08:15:04.733626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.733652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.733666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.733719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.733754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.733799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.733814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.733841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.733856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.733884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.733899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.733926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.733941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.733969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.733993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:94616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.734475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.734570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.734619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.734658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.734697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.734775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.242 [2024-12-07 08:15:04.734853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.242 [2024-12-07 08:15:04.734892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.734960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.734974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.242 [2024-12-07 08:15:04.735000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.242 [2024-12-07 08:15:04.735014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:04.735040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:04.735054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.764137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.764318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.764355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 
nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.764392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764794] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.764961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.764992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.765026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.765060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.765093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.765127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 
sqhd:0009 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.765170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.765223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.765826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.765874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.765915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.765956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.765981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.765996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.766037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.766052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.766075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.766090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.766128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.766141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.766165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.243 [2024-12-07 08:15:11.766179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.766201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.766243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:34.243 [2024-12-07 08:15:11.766282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.243 [2024-12-07 08:15:11.766303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.766357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.766438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.766478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.766586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 
08:15:11.766622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.766659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.766732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.766928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.766965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.766988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114456 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.244 [2024-12-07 08:15:11.767797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.767834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 
sqhd:003a p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.767880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.244 [2024-12-07 08:15:11.767918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:34.244 [2024-12-07 08:15:11.767941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.767955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.767977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.767991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.768250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.768359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.768447] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.768491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.768632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.768835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 
08:15:11.768915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.768981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.768995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.245 [2024-12-07 08:15:11.769643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:34.245 [2024-12-07 08:15:11.769938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.245 [2024-12-07 08:15:11.769954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:11.769982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:11.769997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:11.770037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:11.770063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.017967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.017980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20016 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.246 [2024-12-07 08:15:25.018614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.246 [2024-12-07 08:15:25.018628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.018667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.247 [2024-12-07 08:15:25.018720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.018746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.018772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.018799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.018825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.018979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.018991] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019320] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.247 [2024-12-07 08:15:25.019525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.247 [2024-12-07 08:15:25.019816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.247 [2024-12-07 08:15:25.019830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.019842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.019856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.019868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.019882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.019895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.019909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.019921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:34.248 [2024-12-07 08:15:25.019935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.019948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.019962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.019975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.019988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020555] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.248 [2024-12-07 08:15:25.020947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.248 [2024-12-07 08:15:25.020987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.248 [2024-12-07 08:15:25.020999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.249 [2024-12-07 08:15:25.021025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.249 [2024-12-07 08:15:25.021051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.249 [2024-12-07 08:15:25.021077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:34.249 [2024-12-07 08:15:25.021129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.249 [2024-12-07 08:15:25.021180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:34.249 [2024-12-07 08:15:25.021446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2045060 is same with the state(5) to be set 00:24:34.249 [2024-12-07 08:15:25.021478] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.249 [2024-12-07 08:15:25.021489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.249 [2024-12-07 08:15:25.021500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20616 len:8 PRP1 0x0 PRP2 0x0 00:24:34.249 [2024-12-07 08:15:25.021513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.249 [2024-12-07 08:15:25.021583] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2045060 was disconnected and freed. reset controller. 00:24:34.249 [2024-12-07 08:15:25.022924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.249 [2024-12-07 08:15:25.023001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2056a00 (9): Bad file descriptor 00:24:34.249 [2024-12-07 08:15:25.023118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.249 [2024-12-07 08:15:25.023171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.249 [2024-12-07 08:15:25.023192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2056a00 with addr=10.0.0.2, port=4421 00:24:34.249 [2024-12-07 08:15:25.023206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2056a00 is same with the state(5) to be set 00:24:34.249 [2024-12-07 08:15:25.023260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2056a00 (9): Bad file descriptor 00:24:34.249 [2024-12-07 08:15:25.023300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:34.249 [2024-12-07 08:15:25.023317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:34.249 [2024-12-07 08:15:25.023331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.249 [2024-12-07 08:15:25.023354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:34.249 [2024-12-07 08:15:25.023368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.249 [2024-12-07 08:15:35.069273] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
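(Note: errno = 111 in the connect() failures above is ECONNREFUSED on Linux; the initiator keeps retrying the controller reset until the listener at 10.0.0.2:4421 accepts connections again, roughly ten seconds later in this trace. The helper below is not part of the test suite; it is only a hedged convenience for translating such errno values while reading these messages.)

```bash
# Small helper (not part of the test suite) for decoding the errno values that
# appear in messages such as "connect() failed, errno = 111".
decode_errno() {
    python3 -c 'import errno, os, sys
e = int(sys.argv[1])
print(errno.errorcode.get(e, "UNKNOWN"), "-", os.strerror(e))' "$1"
}

decode_errno 111    # ECONNREFUSED - Connection refused
```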
00:24:34.249 Received shutdown signal, test time was about 55.274757 seconds
00:24:34.249
00:24:34.249 Latency(us)
00:24:34.249 [2024-12-07T08:15:45.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.249 [2024-12-07T08:15:45.525Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:34.249 Verification LBA range: start 0x0 length 0x4000
00:24:34.249 Nvme0n1 : 55.27 11993.37 46.85 0.00 0.00 10655.34 860.16 7015926.69
00:24:34.249 [2024-12-07T08:15:45.525Z] ===================================================================================================================
00:24:34.249 [2024-12-07T08:15:45.525Z] Total : 11993.37 46.85 0.00 0.00 10655.34 860.16 7015926.69
00:24:34.249 08:15:45 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:34.506 08:15:45 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:34.506 08:15:45 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:34.506 08:15:45 -- host/multipath.sh@125 -- # nvmftestfini
00:24:34.506 08:15:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:34.506 08:15:45 -- nvmf/common.sh@116 -- # sync
00:24:34.506 08:15:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:34.506 08:15:45 -- nvmf/common.sh@119 -- # set +e
00:24:34.506 08:15:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:34.506 08:15:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:34.506 rmmod nvme_tcp
00:24:34.765 rmmod nvme_fabrics
00:24:34.765 rmmod nvme_keyring
00:24:34.765 08:15:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:34.765 08:15:45 -- nvmf/common.sh@123 -- # set -e
00:24:34.765 08:15:45 -- nvmf/common.sh@124 -- # return 0
00:24:34.765 08:15:45 -- nvmf/common.sh@477 -- # '[' -n 99014 ']'
00:24:34.765 08:15:45 -- nvmf/common.sh@478 -- # killprocess 99014
00:24:34.765 08:15:45 -- common/autotest_common.sh@936 -- # '[' -z 99014 ']'
00:24:34.765 08:15:45 -- common/autotest_common.sh@940 -- # kill -0 99014
00:24:34.765 08:15:45 -- common/autotest_common.sh@941 -- # uname
00:24:34.765 08:15:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:34.765 08:15:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99014
00:24:34.765 08:15:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:34.765 08:15:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:34.765 08:15:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99014'
00:24:34.765 killing process with pid 99014
00:24:34.765 08:15:45 -- common/autotest_common.sh@955 -- # kill 99014
00:24:34.765 08:15:45 -- common/autotest_common.sh@960 -- # wait 99014
00:24:35.023 08:15:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:35.023 08:15:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:35.023 08:15:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:35.023 08:15:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:35.023 08:15:46 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:35.023 08:15:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:35.023 08:15:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:35.023 08:15:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:35.023 08:15:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:35.023
00:24:35.023 real 1m1.555s
00:24:35.023 user 2m54.172s 00:24:35.023
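(Note: the nvmftestfini trace above boils down to three steps: delete the subsystem over RPC, unload the kernel initiator modules, and stop the target process. The sketch below condenses that sequence using only the commands and the PID visible in this log; it is an illustration, not the test suite's own implementation.)

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvmftestfini sequence traced above; the paths and the
# PID (99014) are the ones visible in this log, and the script is illustrative
# rather than the test suite's own code.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
TGT_PID=99014

# 1. Drop the subsystem the multipath test created on the target.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# 2. Detach the kernel initiator modules (this is what produced the rmmod lines above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# 3. Stop the nvmf target process and, when it is a child of this shell, wait for it.
kill "$TGT_PID"
wait "$TGT_PID" 2>/dev/null || true
```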
sys 0m13.542s 00:24:35.023 08:15:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:35.023 08:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:35.023 ************************************ 00:24:35.023 END TEST nvmf_multipath 00:24:35.023 ************************************ 00:24:35.023 08:15:46 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:35.023 08:15:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:35.023 08:15:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:35.023 08:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:35.023 ************************************ 00:24:35.023 START TEST nvmf_timeout 00:24:35.023 ************************************ 00:24:35.024 08:15:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:35.024 * Looking for test storage... 00:24:35.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:35.024 08:15:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:35.024 08:15:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:35.024 08:15:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:35.282 08:15:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:35.282 08:15:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:35.282 08:15:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:35.282 08:15:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:35.282 08:15:46 -- scripts/common.sh@335 -- # IFS=.-: 00:24:35.282 08:15:46 -- scripts/common.sh@335 -- # read -ra ver1 00:24:35.282 08:15:46 -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.282 08:15:46 -- scripts/common.sh@336 -- # read -ra ver2 00:24:35.282 08:15:46 -- scripts/common.sh@337 -- # local 'op=<' 00:24:35.282 08:15:46 -- scripts/common.sh@339 -- # ver1_l=2 00:24:35.282 08:15:46 -- scripts/common.sh@340 -- # ver2_l=1 00:24:35.282 08:15:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:35.282 08:15:46 -- scripts/common.sh@343 -- # case "$op" in 00:24:35.282 08:15:46 -- scripts/common.sh@344 -- # : 1 00:24:35.282 08:15:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:35.283 08:15:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.283 08:15:46 -- scripts/common.sh@364 -- # decimal 1 00:24:35.283 08:15:46 -- scripts/common.sh@352 -- # local d=1 00:24:35.283 08:15:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.283 08:15:46 -- scripts/common.sh@354 -- # echo 1 00:24:35.283 08:15:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:35.283 08:15:46 -- scripts/common.sh@365 -- # decimal 2 00:24:35.283 08:15:46 -- scripts/common.sh@352 -- # local d=2 00:24:35.283 08:15:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.283 08:15:46 -- scripts/common.sh@354 -- # echo 2 00:24:35.283 08:15:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:35.283 08:15:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:35.283 08:15:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:35.283 08:15:46 -- scripts/common.sh@367 -- # return 0 00:24:35.283 08:15:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.283 08:15:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:35.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.283 --rc genhtml_branch_coverage=1 00:24:35.283 --rc genhtml_function_coverage=1 00:24:35.283 --rc genhtml_legend=1 00:24:35.283 --rc geninfo_all_blocks=1 00:24:35.283 --rc geninfo_unexecuted_blocks=1 00:24:35.283 00:24:35.283 ' 00:24:35.283 08:15:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:35.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.283 --rc genhtml_branch_coverage=1 00:24:35.283 --rc genhtml_function_coverage=1 00:24:35.283 --rc genhtml_legend=1 00:24:35.283 --rc geninfo_all_blocks=1 00:24:35.283 --rc geninfo_unexecuted_blocks=1 00:24:35.283 00:24:35.283 ' 00:24:35.283 08:15:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:35.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.283 --rc genhtml_branch_coverage=1 00:24:35.283 --rc genhtml_function_coverage=1 00:24:35.283 --rc genhtml_legend=1 00:24:35.283 --rc geninfo_all_blocks=1 00:24:35.283 --rc geninfo_unexecuted_blocks=1 00:24:35.283 00:24:35.283 ' 00:24:35.283 08:15:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:35.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.283 --rc genhtml_branch_coverage=1 00:24:35.283 --rc genhtml_function_coverage=1 00:24:35.283 --rc genhtml_legend=1 00:24:35.283 --rc geninfo_all_blocks=1 00:24:35.283 --rc geninfo_unexecuted_blocks=1 00:24:35.283 00:24:35.283 ' 00:24:35.283 08:15:46 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:35.283 08:15:46 -- nvmf/common.sh@7 -- # uname -s 00:24:35.283 08:15:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.283 08:15:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.283 08:15:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.283 08:15:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.283 08:15:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.283 08:15:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.283 08:15:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.283 08:15:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.283 08:15:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.283 08:15:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.283 08:15:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:24:35.283 
08:15:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:24:35.283 08:15:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.283 08:15:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.283 08:15:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:35.283 08:15:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:35.283 08:15:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.283 08:15:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.283 08:15:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.283 08:15:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.283 08:15:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.283 08:15:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.283 08:15:46 -- paths/export.sh@5 -- # export PATH 00:24:35.283 08:15:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.283 08:15:46 -- nvmf/common.sh@46 -- # : 0 00:24:35.283 08:15:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:35.283 08:15:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:35.283 08:15:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:35.283 08:15:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.283 08:15:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.283 08:15:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:24:35.283 08:15:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:35.283 08:15:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:35.283 08:15:46 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.283 08:15:46 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.283 08:15:46 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:35.283 08:15:46 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:35.283 08:15:46 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.283 08:15:46 -- host/timeout.sh@19 -- # nvmftestinit 00:24:35.283 08:15:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:35.283 08:15:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.283 08:15:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.283 08:15:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.283 08:15:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.283 08:15:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.283 08:15:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.283 08:15:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.283 08:15:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:35.283 08:15:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:35.283 08:15:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:35.283 08:15:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:35.283 08:15:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:35.283 08:15:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:35.283 08:15:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.283 08:15:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.283 08:15:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:35.283 08:15:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:35.283 08:15:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:35.283 08:15:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:35.283 08:15:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:35.283 08:15:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.283 08:15:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:35.283 08:15:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:35.283 08:15:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:35.283 08:15:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:35.283 08:15:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:35.283 08:15:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:35.283 Cannot find device "nvmf_tgt_br" 00:24:35.283 08:15:46 -- nvmf/common.sh@154 -- # true 00:24:35.283 08:15:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:35.283 Cannot find device "nvmf_tgt_br2" 00:24:35.283 08:15:46 -- nvmf/common.sh@155 -- # true 00:24:35.283 08:15:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:35.283 08:15:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:35.283 Cannot find device "nvmf_tgt_br" 00:24:35.283 08:15:46 -- nvmf/common.sh@157 -- # true 00:24:35.283 08:15:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:35.283 Cannot find device "nvmf_tgt_br2" 00:24:35.283 08:15:46 -- nvmf/common.sh@158 -- # true 00:24:35.283 08:15:46 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:35.283 08:15:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:35.283 08:15:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:35.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.283 08:15:46 -- nvmf/common.sh@161 -- # true 00:24:35.283 08:15:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:35.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.283 08:15:46 -- nvmf/common.sh@162 -- # true 00:24:35.283 08:15:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:35.283 08:15:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:35.283 08:15:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:35.284 08:15:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:35.284 08:15:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:35.284 08:15:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:35.284 08:15:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:35.284 08:15:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:35.542 08:15:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:35.542 08:15:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:35.542 08:15:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:35.542 08:15:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:35.542 08:15:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:35.542 08:15:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:35.542 08:15:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:35.542 08:15:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:35.542 08:15:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:35.542 08:15:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:35.542 08:15:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:35.542 08:15:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:35.542 08:15:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:35.542 08:15:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:35.542 08:15:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:35.542 08:15:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:35.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:24:35.542 00:24:35.542 --- 10.0.0.2 ping statistics --- 00:24:35.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.542 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:35.542 08:15:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:35.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:35.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:24:35.542 00:24:35.542 --- 10.0.0.3 ping statistics --- 00:24:35.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.542 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:35.542 08:15:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:35.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:24:35.542 00:24:35.542 --- 10.0.0.1 ping statistics --- 00:24:35.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.542 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:24:35.542 08:15:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.542 08:15:46 -- nvmf/common.sh@421 -- # return 0 00:24:35.542 08:15:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:35.542 08:15:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.542 08:15:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:35.542 08:15:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:35.542 08:15:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.542 08:15:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:35.542 08:15:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:35.543 08:15:46 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:35.543 08:15:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:35.543 08:15:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.543 08:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:35.543 08:15:46 -- nvmf/common.sh@469 -- # nvmfpid=100391 00:24:35.543 08:15:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:35.543 08:15:46 -- nvmf/common.sh@470 -- # waitforlisten 100391 00:24:35.543 08:15:46 -- common/autotest_common.sh@829 -- # '[' -z 100391 ']' 00:24:35.543 08:15:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.543 08:15:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.543 08:15:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.543 08:15:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.543 08:15:46 -- common/autotest_common.sh@10 -- # set +x 00:24:35.543 [2024-12-07 08:15:46.748404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:35.543 [2024-12-07 08:15:46.748498] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.801 [2024-12-07 08:15:46.880905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:35.801 [2024-12-07 08:15:46.959056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:35.801 [2024-12-07 08:15:46.959227] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.801 [2024-12-07 08:15:46.959241] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
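For orientation, the nvmf_veth_init sequence traced above reduces to roughly the standalone sketch below. It reuses the names and addresses that appear in the log (namespace nvmf_tgt_ns_spdk, veth pairs nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2, bridge nvmf_br, addresses 10.0.0.1-3); it is a condensed approximation of what nvmf/common.sh does, not an exact replay, and would need to run as root.
#!/usr/bin/env bash
# Namespace that holds the target-side ends of the veth pairs.
ip netns add nvmf_tgt_ns_spdk
# One veth pair for the initiator, two for the target listeners.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up, then bridge the host-side ends together.
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# Allow NVMe/TCP (port 4420) in and let the bridge forward between its ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Sanity checks, mirroring the pings in the log.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1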
00:24:35.801 [2024-12-07 08:15:46.959249] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.801 [2024-12-07 08:15:46.959365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.801 [2024-12-07 08:15:46.959377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.737 08:15:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.737 08:15:47 -- common/autotest_common.sh@862 -- # return 0 00:24:36.737 08:15:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:36.737 08:15:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.737 08:15:47 -- common/autotest_common.sh@10 -- # set +x 00:24:36.737 08:15:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.737 08:15:47 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:36.737 08:15:47 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:36.997 [2024-12-07 08:15:48.038220] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.997 08:15:48 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:37.256 Malloc0 00:24:37.256 08:15:48 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.514 08:15:48 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.772 08:15:48 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.772 [2024-12-07 08:15:49.022277] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.031 08:15:49 -- host/timeout.sh@32 -- # bdevperf_pid=100483 00:24:38.031 08:15:49 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:38.031 08:15:49 -- host/timeout.sh@34 -- # waitforlisten 100483 /var/tmp/bdevperf.sock 00:24:38.031 08:15:49 -- common/autotest_common.sh@829 -- # '[' -z 100483 ']' 00:24:38.031 08:15:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.031 08:15:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.031 08:15:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.031 08:15:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.031 08:15:49 -- common/autotest_common.sh@10 -- # set +x 00:24:38.031 [2024-12-07 08:15:49.099294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
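Stripped of the xtrace noise, the target-side bring-up that just ran is essentially the sequence below (same RPCs and arguments as in the trace; rpc.py talks to the nvmf_tgt started inside nvmf_tgt_ns_spdk, and the 64 MiB / 512-byte-block Malloc0 bdev backs the namespace). This is a condensed restatement of the commands visible above, not the full timeout.sh logic.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# TCP transport; -o and -u 8192 taken verbatim from the trace.
$rpc nvmf_create_transport -t tcp -o -u 8192
# RAM-backed bdev used as the namespace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512).
$rpc bdev_malloc_create 64 512 -b Malloc0
# Subsystem cnode1 (-a: allow any host, -s: serial number), with Malloc0 as its namespace.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listener on the in-namespace address that the initiator will dial.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420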
00:24:38.031 [2024-12-07 08:15:49.099444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100483 ] 00:24:38.031 [2024-12-07 08:15:49.246414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.290 [2024-12-07 08:15:49.326421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.227 08:15:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.227 08:15:50 -- common/autotest_common.sh@862 -- # return 0 00:24:39.227 08:15:50 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:39.227 08:15:50 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:39.807 NVMe0n1 00:24:39.807 08:15:50 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.807 08:15:50 -- host/timeout.sh@51 -- # rpc_pid=100531 00:24:39.807 08:15:50 -- host/timeout.sh@53 -- # sleep 1 00:24:39.807 Running I/O for 10 seconds... 00:24:40.747 08:15:51 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.748 [2024-12-07 08:15:52.010689] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010765] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010824] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 
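To make the scenario easier to follow before the long run of qpair errors that follows: bdevperf is started in RPC-driven mode, a controller is attached with a short reconnect/ctrlr-loss budget, verify I/O is kicked off, and then timeout.sh removes the listener out from under the connection (host/timeout.sh@55 above); the repeated tcp.c recv-state errors are that qpair being torn down. A hedged sketch of the initiator-side sequence, using the same flags as the trace:
spdk=/home/vagrant/spdk_repo/spdk
# bdevperf on core 2, waiting for RPCs on its own socket: queue depth 128, 4 KiB verify I/O for 10 s.
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
# bdev_nvme options exactly as in the trace (-r -1).
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
# Attach the TCP controller with a 2 s reconnect delay and a 5 s ctrlr-loss timeout.
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# Start the workload, then yank the listener to force the reconnect/timeout path.
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
$spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420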
00:24:40.748 [2024-12-07 08:15:52.010838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010957] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010986] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.010993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.011001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.011008] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.011015] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.011022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7e490 is same with the state(5) to be set 00:24:40.748 [2024-12-07 08:15:52.011309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011494] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.748 [2024-12-07 08:15:52.011769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.748 [2024-12-07 08:15:52.011779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.011989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.011998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 
08:15:52.012094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.749 [2024-12-07 08:15:52.012413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:90 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.749 [2024-12-07 08:15:52.012568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.749 [2024-12-07 08:15:52.012579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.012668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.012704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.012790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 
08:15:52.012943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.012967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.012985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.012995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.013002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.013077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.013137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.013170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.750 [2024-12-07 08:15:52.013187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.750 [2024-12-07 08:15:52.013373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.750 [2024-12-07 08:15:52.013384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:40.751 [2024-12-07 08:15:52.013597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013852] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.751 [2024-12-07 08:15:52.013860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.013990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.751 [2024-12-07 08:15:52.013998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.014013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1440780 is same with the state(5) to be set 00:24:40.751 [2024-12-07 08:15:52.014025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.751 [2024-12-07 08:15:52.014032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.751 [2024-12-07 08:15:52.014041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1712 len:8 PRP1 0x0 PRP2 0x0 00:24:40.751 [2024-12-07 08:15:52.014049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.751 [2024-12-07 08:15:52.014131] 
bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1440780 was disconnected and freed. reset controller.
00:24:40.751 [2024-12-07 08:15:52.014372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.751 [2024-12-07 08:15:52.014445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bb8c0 (9): Bad file descriptor
00:24:40.751 [2024-12-07 08:15:52.014543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.751 [2024-12-07 08:15:52.014619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.751 [2024-12-07 08:15:52.014635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bb8c0 with addr=10.0.0.2, port=4420
00:24:40.751 [2024-12-07 08:15:52.014646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bb8c0 is same with the state(5) to be set
00:24:40.751 [2024-12-07 08:15:52.014663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bb8c0 (9): Bad file descriptor
00:24:40.751 [2024-12-07 08:15:52.014684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.751 [2024-12-07 08:15:52.014693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.751 [2024-12-07 08:15:52.014703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.751 [2024-12-07 08:15:52.014722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.751 [2024-12-07 08:15:52.014733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.010 08:15:52 -- host/timeout.sh@56 -- # sleep 2
00:24:42.915 [2024-12-07 08:15:54.014818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.915 [2024-12-07 08:15:54.014902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.915 [2024-12-07 08:15:54.014919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bb8c0 with addr=10.0.0.2, port=4420
00:24:42.915 [2024-12-07 08:15:54.014931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bb8c0 is same with the state(5) to be set
00:24:42.915 [2024-12-07 08:15:54.014953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bb8c0 (9): Bad file descriptor
00:24:42.915 [2024-12-07 08:15:54.014969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.915 [2024-12-07 08:15:54.014978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.915 [2024-12-07 08:15:54.014987] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.915 [2024-12-07 08:15:54.015011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.915 [2024-12-07 08:15:54.015022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.915 08:15:54 -- host/timeout.sh@57 -- # get_controller
00:24:42.915 08:15:54 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:42.915 08:15:54 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:43.174 08:15:54 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:43.174 08:15:54 -- host/timeout.sh@58 -- # get_bdev
00:24:43.174 08:15:54 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:43.174 08:15:54 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:43.433 08:15:54 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:43.433 08:15:54 -- host/timeout.sh@61 -- # sleep 5
00:24:44.817 [2024-12-07 08:15:56.015148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.817 [2024-12-07 08:15:56.015257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:44.817 [2024-12-07 08:15:56.015276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bb8c0 with addr=10.0.0.2, port=4420
00:24:44.817 [2024-12-07 08:15:56.015288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bb8c0 is same with the state(5) to be set
00:24:44.817 [2024-12-07 08:15:56.015312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bb8c0 (9): Bad file descriptor
00:24:44.817 [2024-12-07 08:15:56.015329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:44.817 [2024-12-07 08:15:56.015338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:44.817 [2024-12-07 08:15:56.015348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:44.817 [2024-12-07 08:15:56.015372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:44.817 [2024-12-07 08:15:56.015383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:46.809 [2024-12-07 08:15:58.015436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:46.809 [2024-12-07 08:15:58.015491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:46.809 [2024-12-07 08:15:58.015501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:46.809 [2024-12-07 08:15:58.015510] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:46.809 [2024-12-07 08:15:58.015534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
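For reference, the host/timeout.sh@57/@58 checks traced at 08:15:54 above reduce to two RPC round-trips against the bdevperf socket. The following is a minimal sketch only, assuming the same socket path and using the rpc.py subcommands exactly as they appear in the trace; the function bodies are illustrative and are not the actual host/timeout.sh source:

    # Sketch, not the real test script: same RPC socket and subcommands as the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    get_controller() {
        # One RPC round-trip: list the NVMe controllers bdevperf has attached, print their names.
        "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Second round-trip: list the bdevs built on top of those controllers.
        "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
    }

    [[ $(get_controller) == "NVMe0" ]]
    [[ $(get_bdev) == "NVMe0n1" ]]

While the controller is only mid-reconnect, both names are still reported, which is what the [[ NVMe0 == ... ]] and [[ NVMe0n1 == ... ]] assertions in the trace verify.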
00:24:47.744
00:24:47.744 Latency(us)
00:24:47.744 [2024-12-07T08:15:59.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.744 [2024-12-07T08:15:59.020Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:47.744 Verification LBA range: start 0x0 length 0x4000
00:24:47.744 NVMe0n1 : 8.13 2033.54 7.94 15.75 0.00 62376.33 2561.86 7015926.69
00:24:47.744 [2024-12-07T08:15:59.020Z] ===================================================================================================================
00:24:47.744 [2024-12-07T08:15:59.020Z] Total : 2033.54 7.94 15.75 0.00 62376.33 2561.86 7015926.69
00:24:48.003 0
00:24:48.569 08:15:59 -- host/timeout.sh@62 -- # get_controller
00:24:48.569 08:15:59 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:48.569 08:15:59 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:49.132 08:16:00 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:49.132 08:16:00 -- host/timeout.sh@63 -- # get_bdev
00:24:49.132 08:16:00 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:49.132 08:16:00 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:49.132 08:16:00 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:49.132 08:16:00 -- host/timeout.sh@65 -- # wait 100531
00:24:49.132 08:16:00 -- host/timeout.sh@67 -- # killprocess 100483
00:24:49.132 08:16:00 -- common/autotest_common.sh@936 -- # '[' -z 100483 ']'
00:24:49.132 08:16:00 -- common/autotest_common.sh@940 -- # kill -0 100483
00:24:49.132 08:16:00 -- common/autotest_common.sh@941 -- # uname
00:24:49.132 08:16:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:49.132 08:16:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100483
00:24:49.132 08:16:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:49.132 killing process with pid 100483 08:16:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:24:49.132 08:16:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100483'
00:24:49.132 Received shutdown signal, test time was about 9.267425 seconds
00:24:49.132
00:24:49.132 Latency(us)
00:24:49.132 [2024-12-07T08:16:00.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:49.132 [2024-12-07T08:16:00.408Z] ===================================================================================================================
00:24:49.132 [2024-12-07T08:16:00.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:49.132 08:16:00 -- common/autotest_common.sh@955 -- # kill 100483
00:24:49.132 08:16:00 -- common/autotest_common.sh@960 -- # wait 100483
00:24:49.132 08:16:00 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:49.389 [2024-12-07 08:16:00.531371] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:49.389 08:16:00 -- host/timeout.sh@74 -- # bdevperf_pid=100690
00:24:49.389 08:16:00 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:49.389 08:16:00 -- host/timeout.sh@76 -- # waitforlisten 100690 /var/tmp/bdevperf.sock
00:24:49.389 08:16:00 -- common/autotest_common.sh@829 -- # '[' -z 100690 ']'
00:24:49.389 08:16:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:49.389 08:16:00 -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:49.389 08:16:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:49.389 08:16:00 -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:49.389 08:16:00 -- common/autotest_common.sh@10 -- # set +x
00:24:49.389 [2024-12-07 08:16:00.593000] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... [2024-12-07 08:16:00.593081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100690 ]
00:24:49.646 [2024-12-07 08:16:00.720849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:49.646 [2024-12-07 08:16:00.786343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:50.577 08:16:01 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:50.577 08:16:01 -- common/autotest_common.sh@862 -- # return 0
00:24:50.577 08:16:01 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:50.577 08:16:01 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:51.144 NVMe0n1
00:24:51.144 08:16:02 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:51.144 08:16:02 -- host/timeout.sh@84 -- # rpc_pid=100732
00:24:51.144 08:16:02 -- host/timeout.sh@86 -- # sleep 1
00:24:51.144 Running I/O for 10 seconds...
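For reference, the bring-up of the second bdevperf instance traced above (pid 100690) can be read as the sequence below. This is a condensed sketch assembled from the commands in the log, with paths and flags exactly as logged; packaging them into one plain script, and the backgrounding of bdevperf, are the only assumptions:

    # Sketch of the traced bring-up, not a copy of host/timeout.sh.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock

    # bdevperf on core mask 0x4, queue depth 128, 4096-byte verify workload, 10 s run;
    # -z defers the job so it can be started later via bdevperf.py, as the trace does.
    "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &

    # Retry count -1 as issued in the trace, then attach the TCP controller with the
    # reconnect knobs under test.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

    # Kick off the I/O job defined by the bdevperf command line.
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

Per the option names, reconnects are attempted every second, outstanding I/O is failed fast after two seconds without a connection, and the controller is only given up after five.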
00:24:52.079 08:16:03 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.341 [2024-12-07 08:16:03.381555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381648] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381672] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381688] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381696] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381776] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381801] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381809] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381817] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381825] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.381873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023ca0 is same with the state(5) to be set 00:24:52.341 [2024-12-07 08:16:03.382116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.341 [2024-12-07 08:16:03.382551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.341 [2024-12-07 08:16:03.382562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:130032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:130656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.382770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.382811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.382894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.382987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.382996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.383118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.383138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.383159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.383179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.342 [2024-12-07 08:16:03.383209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.342 [2024-12-07 08:16:03.383313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.342 [2024-12-07 08:16:03.383324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 
[2024-12-07 08:16:03.383344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:130832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383546] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383957] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:71 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.383966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.383986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.383997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.384006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.384017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.343 [2024-12-07 08:16:03.384026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.384037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.384047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.384057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.384066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.384077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.343 [2024-12-07 08:16:03.384086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.343 [2024-12-07 08:16:03.384097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.344 [2024-12-07 08:16:03.384226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.344 [2024-12-07 08:16:03.384246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.344 [2024-12-07 08:16:03.384307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.344 [2024-12-07 08:16:03.384348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.344 [2024-12-07 08:16:03.384368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:52.344 [2024-12-07 08:16:03.384390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.344 [2024-12-07 08:16:03.384431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:52.344 [2024-12-07 08:16:03.384511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:52.344 [2024-12-07 08:16:03.384800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfcb660 is same with the state(5) to be set 00:24:52.344 [2024-12-07 08:16:03.384822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:52.344 [2024-12-07 08:16:03.384830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:52.344 [2024-12-07 08:16:03.384838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130664 len:8 PRP1 0x0 PRP2 0x0 00:24:52.344 [2024-12-07 08:16:03.384847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.384899] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfcb660 was disconnected and freed. reset controller. 00:24:52.344 [2024-12-07 08:16:03.384984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.344 [2024-12-07 08:16:03.385001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.385017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.344 [2024-12-07 08:16:03.385026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.385036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.344 [2024-12-07 08:16:03.385050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.344 [2024-12-07 08:16:03.385060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:52.344 [2024-12-07 08:16:03.385068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:52.345 [2024-12-07 08:16:03.385077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468c0 is same with the state(5) to be set 00:24:52.345 [2024-12-07 08:16:03.385303] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.345 [2024-12-07 08:16:03.385331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:24:52.345 [2024-12-07 08:16:03.385430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.345 [2024-12-07 08:16:03.385485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.345 [2024-12-07 08:16:03.385502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf468c0 with addr=10.0.0.2, port=4420 00:24:52.345 [2024-12-07 08:16:03.385512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468c0 is same with the state(5) to be set 00:24:52.345 [2024-12-07 08:16:03.385530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:24:52.345 [2024-12-07 08:16:03.385546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:52.345 [2024-12-07 08:16:03.385555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:52.345 
[2024-12-07 08:16:03.385565] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:52.345 [2024-12-07 08:16:03.385585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:52.345 [2024-12-07 08:16:03.385596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:52.345 08:16:03 -- host/timeout.sh@90 -- # sleep 1 00:24:53.281 [2024-12-07 08:16:04.385710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-12-07 08:16:04.385864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:53.281 [2024-12-07 08:16:04.385883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf468c0 with addr=10.0.0.2, port=4420 00:24:53.281 [2024-12-07 08:16:04.385896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468c0 is same with the state(5) to be set 00:24:53.281 [2024-12-07 08:16:04.385922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:24:53.281 [2024-12-07 08:16:04.385940] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:53.281 [2024-12-07 08:16:04.385951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:53.281 [2024-12-07 08:16:04.385961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:53.281 [2024-12-07 08:16:04.385987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:53.281 [2024-12-07 08:16:04.386000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:53.281 08:16:04 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.540 [2024-12-07 08:16:04.659548] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.540 08:16:04 -- host/timeout.sh@92 -- # wait 100732 00:24:54.474 [2024-12-07 08:16:05.402217] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:01.068 00:25:01.068 Latency(us) 00:25:01.068 [2024-12-07T08:16:12.344Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.068 [2024-12-07T08:16:12.344Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:01.068 Verification LBA range: start 0x0 length 0x4000 00:25:01.068 NVMe0n1 : 10.00 10727.42 41.90 0.00 0.00 11912.72 983.04 3019898.88 00:25:01.068 [2024-12-07T08:16:12.344Z] =================================================================================================================== 00:25:01.068 [2024-12-07T08:16:12.344Z] Total : 10727.42 41.90 0.00 0.00 11912.72 983.04 3019898.88 00:25:01.068 0 00:25:01.068 08:16:12 -- host/timeout.sh@97 -- # rpc_pid=100854 00:25:01.069 08:16:12 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:01.069 08:16:12 -- host/timeout.sh@98 -- # sleep 1 00:25:01.327 Running I/O for 10 seconds... 
00:25:02.262 08:16:13 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.525 [2024-12-07 08:16:13.539114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.539603] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.539713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.539808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.539881] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.539933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.539995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540171] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540613] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.540943] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541007] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541131] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541252] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541322] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541385] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541524] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541600] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541663] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541826] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.541980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542390] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542463] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542724] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542867] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.542989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543036] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543173] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543364] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543503] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543725] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.543981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.544039] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e7f110 is same with the state(5) to be set 00:25:02.525 [2024-12-07 08:16:13.544400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.525 [2024-12-07 08:16:13.544442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.525 [2024-12-07 08:16:13.544465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.525 [2024-12-07 08:16:13.544476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.525 [2024-12-07 08:16:13.544496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.525 [2024-12-07 08:16:13.544506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.525 [2024-12-07 08:16:13.544517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.525 [2024-12-07 08:16:13.544527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 
[2024-12-07 08:16:13.544679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.544987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.544995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:4624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.526 [2024-12-07 08:16:13.545219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.526 [2024-12-07 08:16:13.545292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.526 [2024-12-07 08:16:13.545313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.526 [2024-12-07 08:16:13.545333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.526 [2024-12-07 08:16:13.545384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.526 [2024-12-07 08:16:13.545393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 
08:16:13.545555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.545964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.545984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.545995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.527 [2024-12-07 08:16:13.546024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:02.527 [2024-12-07 08:16:13.546279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.527 [2024-12-07 08:16:13.546289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.527 [2024-12-07 08:16:13.546300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546484] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546714] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5568 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.546950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.546988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.546999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.547007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.547017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.547026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.547036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.528 [2024-12-07 08:16:13.547045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.547056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.547065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.547075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 08:16:13.547083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.547093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.528 [2024-12-07 
08:16:13.547102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.528 [2024-12-07 08:16:13.547113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.529 [2024-12-07 08:16:13.547121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.529 [2024-12-07 08:16:13.547132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.529 [2024-12-07 08:16:13.547140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.529 [2024-12-07 08:16:13.547150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.529 [2024-12-07 08:16:13.547159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.529 [2024-12-07 08:16:13.547169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.529 [2024-12-07 08:16:13.547177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.529 [2024-12-07 08:16:13.547187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf971d0 is same with the state(5) to be set 00:25:02.529 [2024-12-07 08:16:13.547197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:02.529 [2024-12-07 08:16:13.547221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:02.529 [2024-12-07 08:16:13.547247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5144 len:8 PRP1 0x0 PRP2 0x0 00:25:02.529 [2024-12-07 08:16:13.547264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.529 [2024-12-07 08:16:13.547317] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf971d0 was disconnected and freed. reset controller. 
00:25:02.529 [2024-12-07 08:16:13.547551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.529 [2024-12-07 08:16:13.547658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:25:02.529 [2024-12-07 08:16:13.547754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.529 [2024-12-07 08:16:13.547810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.529 [2024-12-07 08:16:13.547826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf468c0 with addr=10.0.0.2, port=4420 00:25:02.529 [2024-12-07 08:16:13.547836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468c0 is same with the state(5) to be set 00:25:02.529 [2024-12-07 08:16:13.547854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:25:02.529 [2024-12-07 08:16:13.547869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.529 [2024-12-07 08:16:13.547878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.529 [2024-12-07 08:16:13.547887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.529 [2024-12-07 08:16:13.547907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.529 [2024-12-07 08:16:13.547918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.529 08:16:13 -- host/timeout.sh@101 -- # sleep 3 00:25:03.465 [2024-12-07 08:16:14.547989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.465 [2024-12-07 08:16:14.548541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.465 [2024-12-07 08:16:14.548697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf468c0 with addr=10.0.0.2, port=4420 00:25:03.465 [2024-12-07 08:16:14.548779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468c0 is same with the state(5) to be set 00:25:03.465 [2024-12-07 08:16:14.548878] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:25:03.465 [2024-12-07 08:16:14.548955] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:03.465 [2024-12-07 08:16:14.549039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:03.465 [2024-12-07 08:16:14.549106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:03.465 [2024-12-07 08:16:14.549187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:03.465 [2024-12-07 08:16:14.549330] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.401 [2024-12-07 08:16:15.549509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.401 [2024-12-07 08:16:15.549897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.401 [2024-12-07 08:16:15.550008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf468c0 with addr=10.0.0.2, port=4420 00:25:04.401 [2024-12-07 08:16:15.550099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468c0 is same with the state(5) to be set 00:25:04.401 [2024-12-07 08:16:15.550195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:25:04.401 [2024-12-07 08:16:15.550300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:04.401 [2024-12-07 08:16:15.550371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:04.401 [2024-12-07 08:16:15.550458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.401 [2024-12-07 08:16:15.550535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:04.401 [2024-12-07 08:16:15.550618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.335 [2024-12-07 08:16:16.550934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.335 [2024-12-07 08:16:16.551378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.335 [2024-12-07 08:16:16.551492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf468c0 with addr=10.0.0.2, port=4420 00:25:05.335 [2024-12-07 08:16:16.551569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf468c0 is same with the state(5) to be set 00:25:05.335 [2024-12-07 08:16:16.551804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf468c0 (9): Bad file descriptor 00:25:05.335 [2024-12-07 08:16:16.552008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.335 [2024-12-07 08:16:16.552095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:05.335 [2024-12-07 08:16:16.552163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:05.335 [2024-12-07 08:16:16.554559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.335 [2024-12-07 08:16:16.554692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:05.335 08:16:16 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.593 [2024-12-07 08:16:16.819679] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.593 08:16:16 -- host/timeout.sh@103 -- # wait 100854 00:25:06.525 [2024-12-07 08:16:17.572447] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:11.791 00:25:11.791 Latency(us) 00:25:11.791 [2024-12-07T08:16:23.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.791 [2024-12-07T08:16:23.067Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:11.791 Verification LBA range: start 0x0 length 0x4000 00:25:11.791 NVMe0n1 : 10.01 9172.48 35.83 6585.59 0.00 8105.68 580.89 3019898.88 00:25:11.791 [2024-12-07T08:16:23.067Z] =================================================================================================================== 00:25:11.791 [2024-12-07T08:16:23.067Z] Total : 9172.48 35.83 6585.59 0.00 8105.68 0.00 3019898.88 00:25:11.791 0 00:25:11.791 08:16:22 -- host/timeout.sh@105 -- # killprocess 100690 00:25:11.791 08:16:22 -- common/autotest_common.sh@936 -- # '[' -z 100690 ']' 00:25:11.791 08:16:22 -- common/autotest_common.sh@940 -- # kill -0 100690 00:25:11.791 08:16:22 -- common/autotest_common.sh@941 -- # uname 00:25:11.791 08:16:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.791 08:16:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100690 00:25:11.791 killing process with pid 100690 00:25:11.791 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.791 00:25:11.791 Latency(us) 00:25:11.791 [2024-12-07T08:16:23.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.791 [2024-12-07T08:16:23.067Z] =================================================================================================================== 00:25:11.791 [2024-12-07T08:16:23.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.791 08:16:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:11.791 08:16:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:11.791 08:16:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100690' 00:25:11.791 08:16:22 -- common/autotest_common.sh@955 -- # kill 100690 00:25:11.791 08:16:22 -- common/autotest_common.sh@960 -- # wait 100690 00:25:11.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:11.791 08:16:22 -- host/timeout.sh@110 -- # bdevperf_pid=100975 00:25:11.791 08:16:22 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:11.791 08:16:22 -- host/timeout.sh@112 -- # waitforlisten 100975 /var/tmp/bdevperf.sock 00:25:11.791 08:16:22 -- common/autotest_common.sh@829 -- # '[' -z 100975 ']' 00:25:11.791 08:16:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:11.791 08:16:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.791 08:16:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:11.791 08:16:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.791 08:16:22 -- common/autotest_common.sh@10 -- # set +x 00:25:11.791 [2024-12-07 08:16:22.744172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:11.791 [2024-12-07 08:16:22.744565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100975 ] 00:25:11.791 [2024-12-07 08:16:22.898369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.791 [2024-12-07 08:16:22.958704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.726 08:16:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:12.726 08:16:23 -- common/autotest_common.sh@862 -- # return 0 00:25:12.726 08:16:23 -- host/timeout.sh@116 -- # dtrace_pid=101005 00:25:12.726 08:16:23 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:12.726 08:16:23 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:12.984 08:16:24 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:13.242 NVMe0n1 00:25:13.242 08:16:24 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:13.242 08:16:24 -- host/timeout.sh@124 -- # rpc_pid=101061 00:25:13.242 08:16:24 -- host/timeout.sh@125 -- # sleep 1 00:25:13.242 Running I/O for 10 seconds... 00:25:14.178 08:16:25 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.440 [2024-12-07 08:16:25.576980] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577070] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577093] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577110] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577140] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577155] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577163] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577200] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577272] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577280] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577296] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577328] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.577360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e82ba0 is same with the state(5) to be set 00:25:14.440 [2024-12-07 08:16:25.578014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:129072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.440 [2024-12-07 08:16:25.578530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.440 [2024-12-07 08:16:25.578540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578575] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:10368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.578988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.578997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 
08:16:25.579169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-12-07 08:16:25.579198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:117736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-12-07 08:16:25.579235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579625] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.579989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.579998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.580008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.580017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:14.442 [2024-12-07 08:16:25.580027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-12-07 08:16:25.580036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-12-07 08:16:25.580046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580262] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580687] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:52896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-12-07 08:16:25.580797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-12-07 08:16:25.580825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:14.444 [2024-12-07 08:16:25.580845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:14.444 [2024-12-07 08:16:25.580854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115264 len:8 PRP1 0x0 PRP2 0x0 00:25:14.444 [2024-12-07 08:16:25.580863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.444 [2024-12-07 08:16:25.580914] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1eb3780 was disconnected and freed. reset controller. 
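
The flood of READ / ABORTED - SQ DELETION pairs above is the host-side NVMe driver printing every I/O that was still queued when the submission queue was torn down during the controller reset. The status shown as (00/08) is Status Code Type 0x0 (generic command status) with Status Code 0x08, Command Aborted due to SQ Deletion. A small helper along these lines (purely illustrative, not part of the SPDK test scripts) can be used to decode the pair when reading such logs:

    # Illustrative only: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion
    # for the two statuses that appear in this log.
    decode_nvme_status() {
        local sct=$1 sc=$2
        case "${sct}/${sc}" in
            00/00) echo "SUCCESS" ;;
            00/08) echo "ABORTED - SQ DELETION (generic status, command aborted due to SQ deletion)" ;;
            *)     echo "SCT=0x${sct} SC=0x${sc} - see the NVMe base specification status code tables" ;;
        esac
    }
    decode_nvme_status 00 08
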
00:25:14.444 [2024-12-07 08:16:25.580991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.444 [2024-12-07 08:16:25.581006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.444 [2024-12-07 08:16:25.581016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.444 [2024-12-07 08:16:25.581026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.444 [2024-12-07 08:16:25.581035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.444 [2024-12-07 08:16:25.581043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.444 [2024-12-07 08:16:25.581052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.444 [2024-12-07 08:16:25.581061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.444 [2024-12-07 08:16:25.581069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2e8c0 is same with the state(5) to be set 00:25:14.444 [2024-12-07 08:16:25.581357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.444 [2024-12-07 08:16:25.581389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2e8c0 (9): Bad file descriptor 00:25:14.444 [2024-12-07 08:16:25.581492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.444 [2024-12-07 08:16:25.581541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.444 [2024-12-07 08:16:25.581572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2e8c0 with addr=10.0.0.2, port=4420 00:25:14.444 [2024-12-07 08:16:25.581583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2e8c0 is same with the state(5) to be set 00:25:14.444 [2024-12-07 08:16:25.581600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2e8c0 (9): Bad file descriptor 00:25:14.444 [2024-12-07 08:16:25.581616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.444 [2024-12-07 08:16:25.581625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.444 [2024-12-07 08:16:25.581635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.444 [2024-12-07 08:16:25.581654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
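
errno 111 in the posix_sock_create errors above is ECONNREFUSED: the target side has already stopped listening, so each attempt to reconnect to 10.0.0.2:4420 is refused and bdev_nvme schedules the next retry. The retries that follow are spaced roughly two seconds apart, which is consistent with a controller attached with an explicit reconnect policy. A hypothetical attach call of that shape is sketched below; the option names follow recent SPDK rpc.py releases and the numeric values are illustrative, not taken from this run's command line:

    # Hypothetical sketch: attach the controller with a 2-second reconnect delay and an
    # ~8-second controller-loss timeout (values chosen to match the cadence seen above).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 8
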
00:25:14.444 [2024-12-07 08:16:25.581663] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.444 08:16:25 -- host/timeout.sh@128 -- # wait 101061 00:25:16.346 [2024-12-07 08:16:27.581806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.346 [2024-12-07 08:16:27.581882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.346 [2024-12-07 08:16:27.581900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2e8c0 with addr=10.0.0.2, port=4420 00:25:16.346 [2024-12-07 08:16:27.581913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2e8c0 is same with the state(5) to be set 00:25:16.346 [2024-12-07 08:16:27.581935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2e8c0 (9): Bad file descriptor 00:25:16.346 [2024-12-07 08:16:27.581951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:16.347 [2024-12-07 08:16:27.581960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:16.347 [2024-12-07 08:16:27.581970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:16.347 [2024-12-07 08:16:27.581992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:16.347 [2024-12-07 08:16:27.582002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:18.879 [2024-12-07 08:16:29.582132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.879 [2024-12-07 08:16:29.582241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.879 [2024-12-07 08:16:29.582259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e2e8c0 with addr=10.0.0.2, port=4420 00:25:18.879 [2024-12-07 08:16:29.582272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2e8c0 is same with the state(5) to be set 00:25:18.879 [2024-12-07 08:16:29.582291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e2e8c0 (9): Bad file descriptor 00:25:18.879 [2024-12-07 08:16:29.582307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:18.879 [2024-12-07 08:16:29.582315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:18.879 [2024-12-07 08:16:29.582324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:18.879 [2024-12-07 08:16:29.582344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:18.879 [2024-12-07 08:16:29.582354] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:20.782 [2024-12-07 08:16:31.582487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:20.782 [2024-12-07 08:16:31.582530] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:20.782 [2024-12-07 08:16:31.582557] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:20.782 [2024-12-07 08:16:31.582567] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:25:20.782 [2024-12-07 08:16:31.582590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:21.350
00:25:21.350 Latency(us)
00:25:21.350 [2024-12-07T08:16:32.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.350 [2024-12-07T08:16:32.626Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:25:21.350 NVMe0n1 : 8.14 3135.07 12.25 15.73 0.00 40591.46 1936.29 7015926.69
00:25:21.350 [2024-12-07T08:16:32.626Z] ===================================================================================================================
00:25:21.350 [2024-12-07T08:16:32.626Z] Total : 3135.07 12.25 15.73 0.00 40591.46 1936.29 7015926.69
00:25:21.350 0
00:25:21.350 08:16:32 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:21.350 Attaching 5 probes...
00:25:21.350 1345.921245: reset bdev controller NVMe0
00:25:21.350 1345.997090: reconnect bdev controller NVMe0
00:25:21.350 3346.256925: reconnect delay bdev controller NVMe0
00:25:21.350 3346.273916: reconnect bdev controller NVMe0
00:25:21.350 5346.605090: reconnect delay bdev controller NVMe0
00:25:21.350 5346.620566: reconnect bdev controller NVMe0
00:25:21.350 7347.023144: reconnect delay bdev controller NVMe0
00:25:21.350 7347.040606: reconnect bdev controller NVMe0
00:25:21.350 08:16:32 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:25:21.350 08:16:32 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:25:21.350 08:16:32 -- host/timeout.sh@136 -- # kill 101005
00:25:21.350 08:16:32 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:25:21.350 08:16:32 -- host/timeout.sh@139 -- # killprocess 100975
00:25:21.350 08:16:32 -- common/autotest_common.sh@936 -- # '[' -z 100975 ']'
00:25:21.350 08:16:32 -- common/autotest_common.sh@940 -- # kill -0 100975
00:25:21.350 08:16:32 -- common/autotest_common.sh@941 -- # uname
00:25:21.350 08:16:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:21.350 08:16:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100975
00:25:21.610 killing process with pid 100975
Received shutdown signal, test time was about 8.204529 seconds
00:25:21.610
00:25:21.610 Latency(us)
00:25:21.610 [2024-12-07T08:16:32.886Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.610 [2024-12-07T08:16:32.886Z] ===================================================================================================================
00:25:21.610 [2024-12-07T08:16:32.886Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:21.610 08:16:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:25:21.610 08:16:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:25:21.610 08:16:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100975'
00:25:21.610 08:16:32 -- common/autotest_common.sh@955 -- # kill 100975
00:25:21.610 08:16:32 -- common/autotest_common.sh@960 -- # wait 100975
00:25:21.610
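
The trace above records one 'reconnect delay' probe hit per back-off (at roughly 3.3 s, 5.3 s and 7.3 s after the probes attached), and the test passes because the count of 3 makes the (( 3 <= 2 )) guard evaluate false. A minimal re-creation of that check is sketched here; $bpftrace_pid stands in for the PID of the probe script (101005 in this run) and is an assumed variable name, not one shown in the log:

    # Sketch of the pass/fail check performed by timeout.sh@132-137 above.
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected more than two reconnect delays, got $delays" >&2
        exit 1
    fi
    kill "$bpftrace_pid"   # corresponds to 'kill 101005' above
    rm -f "$trace"
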
08:16:32 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:21.869 08:16:33 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:21.869 08:16:33 -- host/timeout.sh@145 -- # nvmftestfini 00:25:21.869 08:16:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:21.869 08:16:33 -- nvmf/common.sh@116 -- # sync 00:25:21.869 08:16:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:21.869 08:16:33 -- nvmf/common.sh@119 -- # set +e 00:25:21.869 08:16:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:21.869 08:16:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:21.869 rmmod nvme_tcp 00:25:21.869 rmmod nvme_fabrics 00:25:22.128 rmmod nvme_keyring 00:25:22.128 08:16:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:22.128 08:16:33 -- nvmf/common.sh@123 -- # set -e 00:25:22.128 08:16:33 -- nvmf/common.sh@124 -- # return 0 00:25:22.128 08:16:33 -- nvmf/common.sh@477 -- # '[' -n 100391 ']' 00:25:22.128 08:16:33 -- nvmf/common.sh@478 -- # killprocess 100391 00:25:22.128 08:16:33 -- common/autotest_common.sh@936 -- # '[' -z 100391 ']' 00:25:22.128 08:16:33 -- common/autotest_common.sh@940 -- # kill -0 100391 00:25:22.128 08:16:33 -- common/autotest_common.sh@941 -- # uname 00:25:22.128 08:16:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.128 08:16:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100391 00:25:22.128 killing process with pid 100391 00:25:22.128 08:16:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:22.128 08:16:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:22.128 08:16:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100391' 00:25:22.128 08:16:33 -- common/autotest_common.sh@955 -- # kill 100391 00:25:22.128 08:16:33 -- common/autotest_common.sh@960 -- # wait 100391 00:25:22.387 08:16:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:22.387 08:16:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:22.387 08:16:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:22.387 08:16:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.387 08:16:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:22.387 08:16:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.387 08:16:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.387 08:16:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.387 08:16:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:22.387 00:25:22.387 real 0m47.279s 00:25:22.387 user 2m19.313s 00:25:22.387 sys 0m5.058s 00:25:22.387 08:16:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:22.387 08:16:33 -- common/autotest_common.sh@10 -- # set +x 00:25:22.387 ************************************ 00:25:22.387 END TEST nvmf_timeout 00:25:22.387 ************************************ 00:25:22.387 08:16:33 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:22.387 08:16:33 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:22.387 08:16:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.387 08:16:33 -- common/autotest_common.sh@10 -- # set +x 00:25:22.388 08:16:33 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:22.388 00:25:22.388 real 17m28.297s 00:25:22.388 user 55m46.530s 00:25:22.388 sys 3m45.320s 00:25:22.388 08:16:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:22.388 08:16:33 -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.388 ************************************ 00:25:22.388 END TEST nvmf_tcp 00:25:22.388 ************************************ 00:25:22.388 08:16:33 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:22.388 08:16:33 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:22.388 08:16:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:22.388 08:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:22.388 08:16:33 -- common/autotest_common.sh@10 -- # set +x 00:25:22.388 ************************************ 00:25:22.388 START TEST spdkcli_nvmf_tcp 00:25:22.388 ************************************ 00:25:22.388 08:16:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:22.388 * Looking for test storage... 00:25:22.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:22.388 08:16:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:22.388 08:16:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:22.388 08:16:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:22.647 08:16:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:22.647 08:16:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:22.647 08:16:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:22.647 08:16:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:22.647 08:16:33 -- scripts/common.sh@335 -- # IFS=.-: 00:25:22.647 08:16:33 -- scripts/common.sh@335 -- # read -ra ver1 00:25:22.647 08:16:33 -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.647 08:16:33 -- scripts/common.sh@336 -- # read -ra ver2 00:25:22.647 08:16:33 -- scripts/common.sh@337 -- # local 'op=<' 00:25:22.647 08:16:33 -- scripts/common.sh@339 -- # ver1_l=2 00:25:22.647 08:16:33 -- scripts/common.sh@340 -- # ver2_l=1 00:25:22.647 08:16:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:22.647 08:16:33 -- scripts/common.sh@343 -- # case "$op" in 00:25:22.647 08:16:33 -- scripts/common.sh@344 -- # : 1 00:25:22.647 08:16:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:22.647 08:16:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:22.647 08:16:33 -- scripts/common.sh@364 -- # decimal 1 00:25:22.647 08:16:33 -- scripts/common.sh@352 -- # local d=1 00:25:22.647 08:16:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.647 08:16:33 -- scripts/common.sh@354 -- # echo 1 00:25:22.647 08:16:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:22.647 08:16:33 -- scripts/common.sh@365 -- # decimal 2 00:25:22.647 08:16:33 -- scripts/common.sh@352 -- # local d=2 00:25:22.647 08:16:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.647 08:16:33 -- scripts/common.sh@354 -- # echo 2 00:25:22.647 08:16:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:22.648 08:16:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:22.648 08:16:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:22.648 08:16:33 -- scripts/common.sh@367 -- # return 0 00:25:22.648 08:16:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.648 08:16:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:22.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.648 --rc genhtml_branch_coverage=1 00:25:22.648 --rc genhtml_function_coverage=1 00:25:22.648 --rc genhtml_legend=1 00:25:22.648 --rc geninfo_all_blocks=1 00:25:22.648 --rc geninfo_unexecuted_blocks=1 00:25:22.648 00:25:22.648 ' 00:25:22.648 08:16:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:22.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.648 --rc genhtml_branch_coverage=1 00:25:22.648 --rc genhtml_function_coverage=1 00:25:22.648 --rc genhtml_legend=1 00:25:22.648 --rc geninfo_all_blocks=1 00:25:22.648 --rc geninfo_unexecuted_blocks=1 00:25:22.648 00:25:22.648 ' 00:25:22.648 08:16:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:22.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.648 --rc genhtml_branch_coverage=1 00:25:22.648 --rc genhtml_function_coverage=1 00:25:22.648 --rc genhtml_legend=1 00:25:22.648 --rc geninfo_all_blocks=1 00:25:22.648 --rc geninfo_unexecuted_blocks=1 00:25:22.648 00:25:22.648 ' 00:25:22.648 08:16:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:22.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.648 --rc genhtml_branch_coverage=1 00:25:22.648 --rc genhtml_function_coverage=1 00:25:22.648 --rc genhtml_legend=1 00:25:22.648 --rc geninfo_all_blocks=1 00:25:22.648 --rc geninfo_unexecuted_blocks=1 00:25:22.648 00:25:22.648 ' 00:25:22.648 08:16:33 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:22.648 08:16:33 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:22.648 08:16:33 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:22.648 08:16:33 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:22.648 08:16:33 -- nvmf/common.sh@7 -- # uname -s 00:25:22.648 08:16:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.648 08:16:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.648 08:16:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.648 08:16:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.648 08:16:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.648 08:16:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.648 08:16:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:25:22.648 08:16:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.648 08:16:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.648 08:16:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.648 08:16:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:25:22.648 08:16:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:25:22.648 08:16:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.648 08:16:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.648 08:16:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:22.648 08:16:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:22.648 08:16:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.648 08:16:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.648 08:16:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.648 08:16:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.648 08:16:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.648 08:16:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.648 08:16:33 -- paths/export.sh@5 -- # export PATH 00:25:22.648 08:16:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.648 08:16:33 -- nvmf/common.sh@46 -- # : 0 00:25:22.648 08:16:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:22.648 08:16:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:22.648 08:16:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:22.648 08:16:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.648 08:16:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.648 08:16:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:22.648 08:16:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:22.648 08:16:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:22.648 08:16:33 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:22.648 08:16:33 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:22.648 08:16:33 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:22.648 08:16:33 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:22.648 08:16:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.648 08:16:33 -- common/autotest_common.sh@10 -- # set +x 00:25:22.648 08:16:33 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:22.648 08:16:33 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=101281 00:25:22.648 08:16:33 -- spdkcli/common.sh@34 -- # waitforlisten 101281 00:25:22.648 08:16:33 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:22.648 08:16:33 -- common/autotest_common.sh@829 -- # '[' -z 101281 ']' 00:25:22.648 08:16:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.648 08:16:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.648 08:16:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.648 08:16:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.648 08:16:33 -- common/autotest_common.sh@10 -- # set +x 00:25:22.648 [2024-12-07 08:16:33.812805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:22.648 [2024-12-07 08:16:33.812895] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101281 ] 00:25:22.908 [2024-12-07 08:16:33.943714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:22.908 [2024-12-07 08:16:34.004219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.908 [2024-12-07 08:16:34.004716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.908 [2024-12-07 08:16:34.004728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.845 08:16:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.845 08:16:34 -- common/autotest_common.sh@862 -- # return 0 00:25:23.845 08:16:34 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:23.845 08:16:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.845 08:16:34 -- common/autotest_common.sh@10 -- # set +x 00:25:23.845 08:16:34 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:23.845 08:16:34 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:23.845 08:16:34 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:23.845 08:16:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:23.845 08:16:34 -- common/autotest_common.sh@10 -- # set +x 00:25:23.845 08:16:34 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:23.845 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:23.845 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:23.845 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:23.845 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:23.845 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:23.845 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:23.845 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:23.845 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:23.845 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:23.845 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.845 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.845 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:23.845 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.845 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:23.846 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:23.846 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:23.846 ' 00:25:24.103 [2024-12-07 08:16:35.336375] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:26.636 [2024-12-07 08:16:37.588649] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.011 [2024-12-07 08:16:38.877668] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:30.540 [2024-12-07 08:16:41.255395] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:32.441 [2024-12-07 08:16:43.308680] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:33.818 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:33.819 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:33.819 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:33.819 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:33.819 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:33.819 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:33.819 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:33.819 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.819 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.819 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:33.819 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:33.819 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:33.819 08:16:44 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:33.819 08:16:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.819 08:16:44 -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 08:16:45 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:33.819 08:16:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.819 08:16:45 -- common/autotest_common.sh@10 -- # set +x 00:25:33.819 08:16:45 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:33.819 08:16:45 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:34.388 08:16:45 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:34.388 08:16:45 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:34.388 08:16:45 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:34.388 08:16:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:34.388 08:16:45 -- common/autotest_common.sh@10 -- # set +x 00:25:34.388 08:16:45 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:34.388 08:16:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.388 08:16:45 -- common/autotest_common.sh@10 -- # set +x 00:25:34.388 08:16:45 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:34.388 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:34.388 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:34.388 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:34.388 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:34.388 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:34.388 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:34.388 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:34.388 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:34.388 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:34.388 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:34.388 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:34.388 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:34.388 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:34.388 ' 00:25:39.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:39.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:39.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:39.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:39.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:39.711 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:39.711 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:39.711 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:39.711 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:39.711 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:39.711 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:39.711 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:39.711 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:39.711 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:39.970 08:16:51 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:39.970 08:16:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.970 08:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:39.970 08:16:51 -- spdkcli/nvmf.sh@90 -- # killprocess 101281 00:25:39.970 08:16:51 -- common/autotest_common.sh@936 -- # '[' -z 101281 ']' 00:25:39.970 08:16:51 -- common/autotest_common.sh@940 -- # kill -0 101281 00:25:39.970 08:16:51 -- common/autotest_common.sh@941 -- # uname 00:25:39.970 08:16:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:39.970 08:16:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101281 00:25:39.970 08:16:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:39.970 08:16:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:39.970 08:16:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101281' 00:25:39.970 killing process with pid 101281 00:25:39.970 08:16:51 -- common/autotest_common.sh@955 -- # kill 101281 00:25:39.970 [2024-12-07 08:16:51.162768] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:39.970 08:16:51 -- common/autotest_common.sh@960 -- # wait 101281 00:25:40.229 Process with pid 101281 is not found 00:25:40.229 08:16:51 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:40.229 08:16:51 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:40.229 08:16:51 -- spdkcli/common.sh@13 -- # '[' -n 101281 ']' 00:25:40.229 08:16:51 -- spdkcli/common.sh@14 -- # killprocess 101281 00:25:40.229 08:16:51 -- common/autotest_common.sh@936 -- # '[' -z 101281 ']' 00:25:40.229 08:16:51 -- common/autotest_common.sh@940 -- # kill -0 101281 00:25:40.229 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (101281) - No such process 00:25:40.229 08:16:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 101281 is not found' 00:25:40.229 08:16:51 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:40.229 08:16:51 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:40.229 08:16:51 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:40.229 ************************************ 00:25:40.229 END TEST spdkcli_nvmf_tcp 00:25:40.229 ************************************ 00:25:40.229 00:25:40.229 real 0m17.790s 00:25:40.229 user 0m38.666s 00:25:40.229 sys 0m0.888s 00:25:40.229 08:16:51 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:25:40.229 08:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:40.229 08:16:51 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:40.229 08:16:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:40.229 08:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:40.229 08:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:40.229 ************************************ 00:25:40.229 START TEST nvmf_identify_passthru 00:25:40.229 ************************************ 00:25:40.229 08:16:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:40.229 * Looking for test storage... 00:25:40.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:40.229 08:16:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:40.229 08:16:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:40.229 08:16:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:40.488 08:16:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:40.488 08:16:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:40.488 08:16:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:40.488 08:16:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:40.488 08:16:51 -- scripts/common.sh@335 -- # IFS=.-: 00:25:40.488 08:16:51 -- scripts/common.sh@335 -- # read -ra ver1 00:25:40.488 08:16:51 -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.488 08:16:51 -- scripts/common.sh@336 -- # read -ra ver2 00:25:40.488 08:16:51 -- scripts/common.sh@337 -- # local 'op=<' 00:25:40.488 08:16:51 -- scripts/common.sh@339 -- # ver1_l=2 00:25:40.488 08:16:51 -- scripts/common.sh@340 -- # ver2_l=1 00:25:40.488 08:16:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:40.488 08:16:51 -- scripts/common.sh@343 -- # case "$op" in 00:25:40.488 08:16:51 -- scripts/common.sh@344 -- # : 1 00:25:40.488 08:16:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:40.488 08:16:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.488 08:16:51 -- scripts/common.sh@364 -- # decimal 1 00:25:40.488 08:16:51 -- scripts/common.sh@352 -- # local d=1 00:25:40.488 08:16:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.488 08:16:51 -- scripts/common.sh@354 -- # echo 1 00:25:40.488 08:16:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:40.488 08:16:51 -- scripts/common.sh@365 -- # decimal 2 00:25:40.488 08:16:51 -- scripts/common.sh@352 -- # local d=2 00:25:40.488 08:16:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.488 08:16:51 -- scripts/common.sh@354 -- # echo 2 00:25:40.488 08:16:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:40.488 08:16:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:40.488 08:16:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:40.488 08:16:51 -- scripts/common.sh@367 -- # return 0 00:25:40.488 08:16:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.488 08:16:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.488 --rc genhtml_branch_coverage=1 00:25:40.488 --rc genhtml_function_coverage=1 00:25:40.488 --rc genhtml_legend=1 00:25:40.488 --rc geninfo_all_blocks=1 00:25:40.488 --rc geninfo_unexecuted_blocks=1 00:25:40.488 00:25:40.488 ' 00:25:40.488 08:16:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.488 --rc genhtml_branch_coverage=1 00:25:40.488 --rc genhtml_function_coverage=1 00:25:40.488 --rc genhtml_legend=1 00:25:40.488 --rc geninfo_all_blocks=1 00:25:40.488 --rc geninfo_unexecuted_blocks=1 00:25:40.488 00:25:40.488 ' 00:25:40.488 08:16:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.488 --rc genhtml_branch_coverage=1 00:25:40.488 --rc genhtml_function_coverage=1 00:25:40.488 --rc genhtml_legend=1 00:25:40.488 --rc geninfo_all_blocks=1 00:25:40.488 --rc geninfo_unexecuted_blocks=1 00:25:40.488 00:25:40.488 ' 00:25:40.488 08:16:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:40.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.488 --rc genhtml_branch_coverage=1 00:25:40.488 --rc genhtml_function_coverage=1 00:25:40.488 --rc genhtml_legend=1 00:25:40.488 --rc geninfo_all_blocks=1 00:25:40.488 --rc geninfo_unexecuted_blocks=1 00:25:40.488 00:25:40.488 ' 00:25:40.488 08:16:51 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:40.489 08:16:51 -- nvmf/common.sh@7 -- # uname -s 00:25:40.489 08:16:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.489 08:16:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.489 08:16:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.489 08:16:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.489 08:16:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.489 08:16:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.489 08:16:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.489 08:16:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.489 08:16:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.489 08:16:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.489 08:16:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 
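The host NQN generated just above with nvme gen-hostnqn, together with its matching host ID, is what the NVME_HOST arguments hand to nvme-cli whenever the initiator side of these tests connects. As a minimal illustrative sketch only, not taken from this log (the target address and subsystem NQN are stand-ins), a manual connect using those values would look like:

  # connect to an NVMe/TCP subsystem using the generated host identity
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme list-subsys                                  # confirm the controller appeared
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # tear it down again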
00:25:40.489 08:16:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:25:40.489 08:16:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.489 08:16:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.489 08:16:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:40.489 08:16:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.489 08:16:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.489 08:16:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.489 08:16:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.489 08:16:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- paths/export.sh@5 -- # export PATH 00:25:40.489 08:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- nvmf/common.sh@46 -- # : 0 00:25:40.489 08:16:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:40.489 08:16:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:40.489 08:16:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:40.489 08:16:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.489 08:16:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.489 08:16:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:40.489 08:16:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:40.489 08:16:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:40.489 08:16:51 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:40.489 08:16:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.489 08:16:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.489 08:16:51 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.489 08:16:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- paths/export.sh@5 -- # export PATH 00:25:40.489 08:16:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.489 08:16:51 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:40.489 08:16:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:40.489 08:16:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.489 08:16:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:40.489 08:16:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:40.489 08:16:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:40.489 08:16:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.489 08:16:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:40.489 08:16:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.489 08:16:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:40.489 08:16:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:40.489 08:16:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:40.489 08:16:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:40.489 08:16:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:40.489 08:16:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:40.489 08:16:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.489 08:16:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.489 08:16:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:40.489 08:16:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:40.489 08:16:51 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.489 08:16:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.489 08:16:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.489 08:16:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.489 08:16:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.489 08:16:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:40.489 08:16:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.489 08:16:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.489 08:16:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:40.489 08:16:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:40.489 Cannot find device "nvmf_tgt_br" 00:25:40.489 08:16:51 -- nvmf/common.sh@154 -- # true 00:25:40.489 08:16:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.489 Cannot find device "nvmf_tgt_br2" 00:25:40.489 08:16:51 -- nvmf/common.sh@155 -- # true 00:25:40.489 08:16:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:40.489 08:16:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:40.489 Cannot find device "nvmf_tgt_br" 00:25:40.489 08:16:51 -- nvmf/common.sh@157 -- # true 00:25:40.489 08:16:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:40.489 Cannot find device "nvmf_tgt_br2" 00:25:40.489 08:16:51 -- nvmf/common.sh@158 -- # true 00:25:40.489 08:16:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:40.489 08:16:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:40.489 08:16:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.489 08:16:51 -- nvmf/common.sh@161 -- # true 00:25:40.489 08:16:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.748 08:16:51 -- nvmf/common.sh@162 -- # true 00:25:40.748 08:16:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.748 08:16:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.748 08:16:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.748 08:16:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.748 08:16:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.748 08:16:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:40.748 08:16:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:40.748 08:16:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:40.748 08:16:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:40.748 08:16:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:40.748 08:16:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:40.748 08:16:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:40.748 08:16:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:40.748 08:16:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:40.748 08:16:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.748 08:16:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.748 08:16:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:40.748 08:16:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:40.748 08:16:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.748 08:16:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.748 08:16:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.748 08:16:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.748 08:16:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.748 08:16:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:40.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:25:40.748 00:25:40.748 --- 10.0.0.2 ping statistics --- 00:25:40.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.748 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:25:40.748 08:16:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:40.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:25:40.748 00:25:40.748 --- 10.0.0.3 ping statistics --- 00:25:40.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.748 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:40.748 08:16:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:40.748 00:25:40.748 --- 10.0.0.1 ping statistics --- 00:25:40.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.748 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:40.748 08:16:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.748 08:16:51 -- nvmf/common.sh@421 -- # return 0 00:25:40.748 08:16:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:40.748 08:16:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.748 08:16:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:40.748 08:16:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:40.748 08:16:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.748 08:16:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:40.748 08:16:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:40.748 08:16:51 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:40.748 08:16:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.748 08:16:51 -- common/autotest_common.sh@10 -- # set +x 00:25:40.748 08:16:51 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:40.748 08:16:51 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:40.748 08:16:51 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:40.748 08:16:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:40.748 08:16:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:40.748 08:16:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:40.748 08:16:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:40.748 08:16:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:40.748 08:16:51 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:40.748 08:16:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:40.748 08:16:51 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:40.748 08:16:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:40.748 08:16:51 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:40.748 08:16:51 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:40.748 08:16:51 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:40.748 08:16:51 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:40.748 08:16:51 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:40.748 08:16:51 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:41.007 08:16:52 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:41.007 08:16:52 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:41.007 08:16:52 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:41.007 08:16:52 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:41.266 08:16:52 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:41.266 08:16:52 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:41.266 08:16:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.266 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.266 08:16:52 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:41.266 08:16:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.266 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.266 08:16:52 -- target/identify_passthru.sh@31 -- # nvmfpid=101788 00:25:41.266 08:16:52 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:41.266 08:16:52 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.266 08:16:52 -- target/identify_passthru.sh@35 -- # waitforlisten 101788 00:25:41.266 08:16:52 -- common/autotest_common.sh@829 -- # '[' -z 101788 ']' 00:25:41.266 08:16:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.266 08:16:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.266 08:16:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.266 08:16:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.266 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.266 [2024-12-07 08:16:52.454468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:41.266 [2024-12-07 08:16:52.454751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.525 [2024-12-07 08:16:52.595615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.525 [2024-12-07 08:16:52.655247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:41.525 [2024-12-07 08:16:52.655658] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.525 [2024-12-07 08:16:52.655709] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.525 [2024-12-07 08:16:52.655830] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
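For orientation, the RPC sequence recorded in the next few entries (issued there through the test's rpc_cmd helper) is equivalent to driving the freshly started nvmf_tgt with scripts/rpc.py directly. A minimal sketch, assuming the standard rpc.py wrapper and the default /var/tmp/spdk.sock RPC socket; the individual commands and arguments are the same ones visible in the log:

  # enable passthru identify handling, finish init, and export the local NVMe device over TCP
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420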
00:25:41.525 [2024-12-07 08:16:52.656011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.525 [2024-12-07 08:16:52.656577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.525 [2024-12-07 08:16:52.656583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.525 [2024-12-07 08:16:52.656309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.525 08:16:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.525 08:16:52 -- common/autotest_common.sh@862 -- # return 0 00:25:41.525 08:16:52 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:41.525 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.525 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.525 08:16:52 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:41.525 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.525 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.525 [2024-12-07 08:16:52.795492] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:41.784 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.784 08:16:52 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.784 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.784 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.784 [2024-12-07 08:16:52.809460] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.784 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.784 08:16:52 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:41.784 08:16:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.784 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.784 08:16:52 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:41.784 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.784 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.784 Nvme0n1 00:25:41.784 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.784 08:16:52 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:41.784 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.784 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.784 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.784 08:16:52 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:41.784 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.784 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.784 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.784 08:16:52 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.784 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.784 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.784 [2024-12-07 08:16:52.948884] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.784 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:41.784 08:16:52 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:41.784 08:16:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.784 08:16:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.784 [2024-12-07 08:16:52.956673] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:41.784 [ 00:25:41.784 { 00:25:41.784 "allow_any_host": true, 00:25:41.784 "hosts": [], 00:25:41.784 "listen_addresses": [], 00:25:41.784 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:41.784 "subtype": "Discovery" 00:25:41.784 }, 00:25:41.784 { 00:25:41.784 "allow_any_host": true, 00:25:41.784 "hosts": [], 00:25:41.784 "listen_addresses": [ 00:25:41.784 { 00:25:41.784 "adrfam": "IPv4", 00:25:41.784 "traddr": "10.0.0.2", 00:25:41.784 "transport": "TCP", 00:25:41.784 "trsvcid": "4420", 00:25:41.784 "trtype": "TCP" 00:25:41.784 } 00:25:41.784 ], 00:25:41.784 "max_cntlid": 65519, 00:25:41.784 "max_namespaces": 1, 00:25:41.784 "min_cntlid": 1, 00:25:41.784 "model_number": "SPDK bdev Controller", 00:25:41.784 "namespaces": [ 00:25:41.784 { 00:25:41.784 "bdev_name": "Nvme0n1", 00:25:41.784 "name": "Nvme0n1", 00:25:41.784 "nguid": "4959B32BE0FA4814912B7ECDEB9CD873", 00:25:41.784 "nsid": 1, 00:25:41.784 "uuid": "4959b32b-e0fa-4814-912b-7ecdeb9cd873" 00:25:41.784 } 00:25:41.784 ], 00:25:41.784 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:41.784 "serial_number": "SPDK00000000000001", 00:25:41.784 "subtype": "NVMe" 00:25:41.784 } 00:25:41.784 ] 00:25:41.784 08:16:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.784 08:16:52 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:41.784 08:16:52 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:41.784 08:16:52 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:42.043 08:16:53 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:42.043 08:16:53 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:42.043 08:16:53 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:42.043 08:16:53 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:42.302 08:16:53 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:42.302 08:16:53 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:42.302 08:16:53 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:42.302 08:16:53 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.302 08:16:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.302 08:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:42.302 08:16:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.302 08:16:53 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:42.302 08:16:53 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:42.302 08:16:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:42.302 08:16:53 -- nvmf/common.sh@116 -- # sync 00:25:42.302 08:16:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:42.302 08:16:53 -- nvmf/common.sh@119 -- # set +e 00:25:42.302 08:16:53 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:42.302 08:16:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:42.302 rmmod nvme_tcp 00:25:42.302 rmmod nvme_fabrics 00:25:42.302 rmmod nvme_keyring 00:25:42.302 08:16:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:42.302 08:16:53 -- nvmf/common.sh@123 -- # set -e 00:25:42.302 08:16:53 -- nvmf/common.sh@124 -- # return 0 00:25:42.302 08:16:53 -- nvmf/common.sh@477 -- # '[' -n 101788 ']' 00:25:42.302 08:16:53 -- nvmf/common.sh@478 -- # killprocess 101788 00:25:42.302 08:16:53 -- common/autotest_common.sh@936 -- # '[' -z 101788 ']' 00:25:42.302 08:16:53 -- common/autotest_common.sh@940 -- # kill -0 101788 00:25:42.302 08:16:53 -- common/autotest_common.sh@941 -- # uname 00:25:42.302 08:16:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:42.302 08:16:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101788 00:25:42.302 killing process with pid 101788 00:25:42.302 08:16:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:42.302 08:16:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:42.302 08:16:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101788' 00:25:42.302 08:16:53 -- common/autotest_common.sh@955 -- # kill 101788 00:25:42.302 [2024-12-07 08:16:53.570967] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:42.302 08:16:53 -- common/autotest_common.sh@960 -- # wait 101788 00:25:42.561 08:16:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:42.561 08:16:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:42.561 08:16:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:42.561 08:16:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.561 08:16:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:42.561 08:16:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.561 08:16:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:42.561 08:16:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.561 08:16:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:42.561 ************************************ 00:25:42.561 END TEST nvmf_identify_passthru 00:25:42.561 ************************************ 00:25:42.561 00:25:42.561 real 0m2.393s 00:25:42.561 user 0m4.762s 00:25:42.561 sys 0m0.790s 00:25:42.561 08:16:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:42.561 08:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:42.820 08:16:53 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:42.820 08:16:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:42.820 08:16:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:42.820 08:16:53 -- common/autotest_common.sh@10 -- # set +x 00:25:42.820 ************************************ 00:25:42.820 START TEST nvmf_dif 00:25:42.820 ************************************ 00:25:42.820 08:16:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:42.820 * Looking for test storage... 
00:25:42.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:42.820 08:16:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:42.820 08:16:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:42.820 08:16:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:42.820 08:16:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:42.820 08:16:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:42.820 08:16:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:42.820 08:16:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:42.820 08:16:54 -- scripts/common.sh@335 -- # IFS=.-: 00:25:42.820 08:16:54 -- scripts/common.sh@335 -- # read -ra ver1 00:25:42.820 08:16:54 -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.820 08:16:54 -- scripts/common.sh@336 -- # read -ra ver2 00:25:42.820 08:16:54 -- scripts/common.sh@337 -- # local 'op=<' 00:25:42.820 08:16:54 -- scripts/common.sh@339 -- # ver1_l=2 00:25:42.820 08:16:54 -- scripts/common.sh@340 -- # ver2_l=1 00:25:42.820 08:16:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:42.820 08:16:54 -- scripts/common.sh@343 -- # case "$op" in 00:25:42.820 08:16:54 -- scripts/common.sh@344 -- # : 1 00:25:42.820 08:16:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:42.820 08:16:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.820 08:16:54 -- scripts/common.sh@364 -- # decimal 1 00:25:42.820 08:16:54 -- scripts/common.sh@352 -- # local d=1 00:25:42.820 08:16:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.821 08:16:54 -- scripts/common.sh@354 -- # echo 1 00:25:42.821 08:16:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:42.821 08:16:54 -- scripts/common.sh@365 -- # decimal 2 00:25:42.821 08:16:54 -- scripts/common.sh@352 -- # local d=2 00:25:42.821 08:16:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.821 08:16:54 -- scripts/common.sh@354 -- # echo 2 00:25:42.821 08:16:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:42.821 08:16:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:42.821 08:16:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:42.821 08:16:54 -- scripts/common.sh@367 -- # return 0 00:25:42.821 08:16:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.821 08:16:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.821 --rc genhtml_branch_coverage=1 00:25:42.821 --rc genhtml_function_coverage=1 00:25:42.821 --rc genhtml_legend=1 00:25:42.821 --rc geninfo_all_blocks=1 00:25:42.821 --rc geninfo_unexecuted_blocks=1 00:25:42.821 00:25:42.821 ' 00:25:42.821 08:16:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.821 --rc genhtml_branch_coverage=1 00:25:42.821 --rc genhtml_function_coverage=1 00:25:42.821 --rc genhtml_legend=1 00:25:42.821 --rc geninfo_all_blocks=1 00:25:42.821 --rc geninfo_unexecuted_blocks=1 00:25:42.821 00:25:42.821 ' 00:25:42.821 08:16:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.821 --rc genhtml_branch_coverage=1 00:25:42.821 --rc genhtml_function_coverage=1 00:25:42.821 --rc genhtml_legend=1 00:25:42.821 --rc geninfo_all_blocks=1 00:25:42.821 --rc geninfo_unexecuted_blocks=1 00:25:42.821 00:25:42.821 ' 00:25:42.821 
08:16:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:42.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.821 --rc genhtml_branch_coverage=1 00:25:42.821 --rc genhtml_function_coverage=1 00:25:42.821 --rc genhtml_legend=1 00:25:42.821 --rc geninfo_all_blocks=1 00:25:42.821 --rc geninfo_unexecuted_blocks=1 00:25:42.821 00:25:42.821 ' 00:25:42.821 08:16:54 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:42.821 08:16:54 -- nvmf/common.sh@7 -- # uname -s 00:25:42.821 08:16:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.821 08:16:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.821 08:16:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.821 08:16:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.821 08:16:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.821 08:16:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.821 08:16:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.821 08:16:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.821 08:16:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.821 08:16:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.821 08:16:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:25:42.821 08:16:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:25:42.821 08:16:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.821 08:16:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.821 08:16:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:42.821 08:16:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:42.821 08:16:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.821 08:16:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.821 08:16:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.821 08:16:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.821 08:16:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.821 08:16:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.821 08:16:54 -- paths/export.sh@5 -- # export PATH 00:25:42.821 08:16:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.821 08:16:54 -- nvmf/common.sh@46 -- # : 0 00:25:42.821 08:16:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:42.821 08:16:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:42.821 08:16:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:42.821 08:16:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.821 08:16:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.821 08:16:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:42.821 08:16:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:42.821 08:16:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:42.821 08:16:54 -- target/dif.sh@15 -- # NULL_META=16 00:25:42.821 08:16:54 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:42.821 08:16:54 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:42.821 08:16:54 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:42.821 08:16:54 -- target/dif.sh@135 -- # nvmftestinit 00:25:42.821 08:16:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:42.821 08:16:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.821 08:16:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:42.821 08:16:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:42.821 08:16:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:42.821 08:16:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.821 08:16:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:42.821 08:16:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.821 08:16:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:42.821 08:16:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:42.821 08:16:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:42.821 08:16:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:42.821 08:16:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:42.821 08:16:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:42.821 08:16:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.821 08:16:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.821 08:16:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:42.821 08:16:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:42.821 08:16:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:42.821 08:16:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:42.821 08:16:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:42.821 08:16:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.821 08:16:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:42.821 08:16:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:42.821 08:16:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:42.821 08:16:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:42.821 08:16:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:42.821 08:16:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:42.821 Cannot find device "nvmf_tgt_br" 
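What nvmf_veth_init is rebuilding in the entries around this point (it tears down any leftovers first, hence the "Cannot find device" noise) is a small bridged veth topology between the host and the nvmf_tgt_ns_spdk namespace. Reduced to its core ip commands and shown only as an illustrative sketch (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

  # namespace plus two veth pairs: initiator side stays on the host, target side moves into the netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the host-side peers together and allow the NVMe/TCP port in
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT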
00:25:42.822 08:16:54 -- nvmf/common.sh@154 -- # true 00:25:42.822 08:16:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:43.081 Cannot find device "nvmf_tgt_br2" 00:25:43.081 08:16:54 -- nvmf/common.sh@155 -- # true 00:25:43.081 08:16:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:43.081 08:16:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:43.081 Cannot find device "nvmf_tgt_br" 00:25:43.081 08:16:54 -- nvmf/common.sh@157 -- # true 00:25:43.081 08:16:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:43.081 Cannot find device "nvmf_tgt_br2" 00:25:43.081 08:16:54 -- nvmf/common.sh@158 -- # true 00:25:43.081 08:16:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:43.081 08:16:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:43.081 08:16:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:43.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.081 08:16:54 -- nvmf/common.sh@161 -- # true 00:25:43.081 08:16:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:43.081 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:43.081 08:16:54 -- nvmf/common.sh@162 -- # true 00:25:43.081 08:16:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:43.081 08:16:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:43.081 08:16:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:43.081 08:16:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:43.081 08:16:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:43.081 08:16:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:43.081 08:16:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:43.081 08:16:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:43.081 08:16:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:43.081 08:16:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:43.081 08:16:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:43.081 08:16:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:43.081 08:16:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:43.081 08:16:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:43.081 08:16:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:43.081 08:16:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:43.081 08:16:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:43.081 08:16:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:43.081 08:16:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:43.081 08:16:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:43.081 08:16:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:43.081 08:16:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:43.081 08:16:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:43.081 08:16:54 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:43.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:25:43.081 00:25:43.081 --- 10.0.0.2 ping statistics --- 00:25:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.081 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:43.081 08:16:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:43.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:43.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:43.081 00:25:43.081 --- 10.0.0.3 ping statistics --- 00:25:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.081 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:43.081 08:16:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:43.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:43.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:25:43.340 00:25:43.340 --- 10.0.0.1 ping statistics --- 00:25:43.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.340 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:25:43.340 08:16:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.340 08:16:54 -- nvmf/common.sh@421 -- # return 0 00:25:43.340 08:16:54 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:43.340 08:16:54 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:43.599 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:43.599 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:43.599 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:43.599 08:16:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.599 08:16:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:43.599 08:16:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:43.599 08:16:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.599 08:16:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:43.599 08:16:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:43.599 08:16:54 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:43.599 08:16:54 -- target/dif.sh@137 -- # nvmfappstart 00:25:43.599 08:16:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:43.599 08:16:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:43.599 08:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:43.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.599 08:16:54 -- nvmf/common.sh@469 -- # nvmfpid=102126 00:25:43.599 08:16:54 -- nvmf/common.sh@470 -- # waitforlisten 102126 00:25:43.599 08:16:54 -- common/autotest_common.sh@829 -- # '[' -z 102126 ']' 00:25:43.599 08:16:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.599 08:16:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:43.599 08:16:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:43.599 08:16:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
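The fio_dif_1_default run that starts in the following entries boils down to: a TCP transport created with --dif-insert-or-strip, a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exported through subsystem nqn.2016-06.io.spdk:cnode0, and fio driven through the SPDK bdev ioengine. As an illustrative sketch only (the rpc.py path, the LD_PRELOAD invocation, and the bdev.json/job.fio file names are assumptions, not from the log):

  # transport with DIF insert/strip, a DIF-capable null bdev, and the subsystem that exports it
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # fio then attaches to the exported namespace through the SPDK bdev fio plugin
  LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev --spdk_json_conf=./bdev.json ./job.fio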
00:25:43.599 08:16:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:43.599 08:16:54 -- common/autotest_common.sh@10 -- # set +x 00:25:43.599 [2024-12-07 08:16:54.818979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:43.599 [2024-12-07 08:16:54.819256] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.858 [2024-12-07 08:16:54.962122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.858 [2024-12-07 08:16:55.035686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:43.858 [2024-12-07 08:16:55.035855] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.858 [2024-12-07 08:16:55.035872] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.858 [2024-12-07 08:16:55.035884] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:43.858 [2024-12-07 08:16:55.035919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.792 08:16:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.792 08:16:55 -- common/autotest_common.sh@862 -- # return 0 00:25:44.792 08:16:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:44.792 08:16:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:44.792 08:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 08:16:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.792 08:16:55 -- target/dif.sh@139 -- # create_transport 00:25:44.792 08:16:55 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:44.792 08:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.792 08:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 [2024-12-07 08:16:55.873259] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.792 08:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.792 08:16:55 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:44.792 08:16:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:44.792 08:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:44.792 08:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 ************************************ 00:25:44.792 START TEST fio_dif_1_default 00:25:44.792 ************************************ 00:25:44.792 08:16:55 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:44.792 08:16:55 -- target/dif.sh@86 -- # create_subsystems 0 00:25:44.792 08:16:55 -- target/dif.sh@28 -- # local sub 00:25:44.792 08:16:55 -- target/dif.sh@30 -- # for sub in "$@" 00:25:44.792 08:16:55 -- target/dif.sh@31 -- # create_subsystem 0 00:25:44.792 08:16:55 -- target/dif.sh@18 -- # local sub_id=0 00:25:44.792 08:16:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:44.792 08:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.792 08:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 bdev_null0 00:25:44.792 08:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.792 08:16:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:44.792 08:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.792 08:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 08:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.792 08:16:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:44.792 08:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.792 08:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 08:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.792 08:16:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:44.792 08:16:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.792 08:16:55 -- common/autotest_common.sh@10 -- # set +x 00:25:44.792 [2024-12-07 08:16:55.925390] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.792 08:16:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.792 08:16:55 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:44.792 08:16:55 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:44.792 08:16:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:44.792 08:16:55 -- nvmf/common.sh@520 -- # config=() 00:25:44.792 08:16:55 -- nvmf/common.sh@520 -- # local subsystem config 00:25:44.792 08:16:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.792 08:16:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:44.792 08:16:55 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:44.793 08:16:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:44.793 { 00:25:44.793 "params": { 00:25:44.793 "name": "Nvme$subsystem", 00:25:44.793 "trtype": "$TEST_TRANSPORT", 00:25:44.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.793 "adrfam": "ipv4", 00:25:44.793 "trsvcid": "$NVMF_PORT", 00:25:44.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.793 "hdgst": ${hdgst:-false}, 00:25:44.793 "ddgst": ${ddgst:-false} 00:25:44.793 }, 00:25:44.793 "method": "bdev_nvme_attach_controller" 00:25:44.793 } 00:25:44.793 EOF 00:25:44.793 )") 00:25:44.793 08:16:55 -- target/dif.sh@82 -- # gen_fio_conf 00:25:44.793 08:16:55 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:44.793 08:16:55 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:44.793 08:16:55 -- target/dif.sh@54 -- # local file 00:25:44.793 08:16:55 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:44.793 08:16:55 -- target/dif.sh@56 -- # cat 00:25:44.793 08:16:55 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:44.793 08:16:55 -- common/autotest_common.sh@1330 -- # shift 00:25:44.793 08:16:55 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:44.793 08:16:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.793 08:16:55 -- nvmf/common.sh@542 -- # cat 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # grep 
libasan 00:25:44.793 08:16:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:44.793 08:16:55 -- target/dif.sh@72 -- # (( file <= files )) 00:25:44.793 08:16:55 -- nvmf/common.sh@544 -- # jq . 00:25:44.793 08:16:55 -- nvmf/common.sh@545 -- # IFS=, 00:25:44.793 08:16:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:44.793 "params": { 00:25:44.793 "name": "Nvme0", 00:25:44.793 "trtype": "tcp", 00:25:44.793 "traddr": "10.0.0.2", 00:25:44.793 "adrfam": "ipv4", 00:25:44.793 "trsvcid": "4420", 00:25:44.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:44.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:44.793 "hdgst": false, 00:25:44.793 "ddgst": false 00:25:44.793 }, 00:25:44.793 "method": "bdev_nvme_attach_controller" 00:25:44.793 }' 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:44.793 08:16:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:44.793 08:16:55 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:44.793 08:16:55 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:44.793 08:16:55 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:44.793 08:16:55 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:44.793 08:16:55 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:45.059 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:45.059 fio-3.35 00:25:45.059 Starting 1 thread 00:25:45.318 [2024-12-07 08:16:56.557234] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
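The target that fio is about to exercise was configured in the lines above: nvmf_tgt started inside the namespace, a TCP transport with DIF insert/strip, one 64 MiB null bdev with 16 bytes of metadata and DIF type 1, and one subsystem listening on 10.0.0.2:4420. The suite issues these through its rpc_cmd helper, a thin wrapper over scripts/rpc.py; a standalone sketch with the paths and arguments taken from the trace (the spdk_get_version readiness poll stands in for the suite's waitforlisten helper):

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"

# Start the target inside the namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
until $RPC spdk_get_version >/dev/null 2>&1; do sleep 0.5; done

# TCP transport with the exact options from the trace, then one null bdev
# exported through one subsystem.
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420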
00:25:45.318 [2024-12-07 08:16:56.557328] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:57.518 00:25:57.518 filename0: (groupid=0, jobs=1): err= 0: pid=102216: Sat Dec 7 08:17:06 2024 00:25:57.518 read: IOPS=3868, BW=15.1MiB/s (15.8MB/s)(151MiB/10001msec) 00:25:57.518 slat (nsec): min=5810, max=43109, avg=7270.98, stdev=2873.73 00:25:57.518 clat (usec): min=337, max=42491, avg=1012.38, stdev=4960.68 00:25:57.518 lat (usec): min=342, max=42501, avg=1019.65, stdev=4960.76 00:25:57.518 clat percentiles (usec): 00:25:57.518 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 367], 00:25:57.518 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:25:57.518 | 70.00th=[ 408], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 474], 00:25:57.518 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:25:57.518 | 99.99th=[42730] 00:25:57.518 bw ( KiB/s): min= 3840, max=23424, per=99.80%, avg=15445.89, stdev=4573.60, samples=19 00:25:57.518 iops : min= 960, max= 5856, avg=3861.47, stdev=1143.40, samples=19 00:25:57.518 lat (usec) : 500=96.90%, 750=1.53%, 1000=0.04% 00:25:57.518 lat (msec) : 10=0.01%, 50=1.52% 00:25:57.518 cpu : usr=89.55%, sys=9.04%, ctx=12, majf=0, minf=0 00:25:57.518 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:57.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:57.518 issued rwts: total=38692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:57.518 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:57.518 00:25:57.518 Run status group 0 (all jobs): 00:25:57.518 READ: bw=15.1MiB/s (15.8MB/s), 15.1MiB/s-15.1MiB/s (15.8MB/s-15.8MB/s), io=151MiB (158MB), run=10001-10001msec 00:25:57.518 08:17:06 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:57.518 08:17:06 -- target/dif.sh@43 -- # local sub 00:25:57.518 08:17:06 -- target/dif.sh@45 -- # for sub in "$@" 00:25:57.519 08:17:06 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:57.519 08:17:06 -- target/dif.sh@36 -- # local sub_id=0 00:25:57.519 08:17:06 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:06 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 00:25:57.519 real 0m10.998s 00:25:57.519 user 0m9.606s 00:25:57.519 sys 0m1.162s 00:25:57.519 ************************************ 00:25:57.519 END TEST fio_dif_1_default 00:25:57.519 ************************************ 00:25:57.519 08:17:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:06 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:57.519 08:17:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:57.519 08:17:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 ************************************ 00:25:57.519 START 
TEST fio_dif_1_multi_subsystems 00:25:57.519 ************************************ 00:25:57.519 08:17:06 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:25:57.519 08:17:06 -- target/dif.sh@92 -- # local files=1 00:25:57.519 08:17:06 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:57.519 08:17:06 -- target/dif.sh@28 -- # local sub 00:25:57.519 08:17:06 -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.519 08:17:06 -- target/dif.sh@31 -- # create_subsystem 0 00:25:57.519 08:17:06 -- target/dif.sh@18 -- # local sub_id=0 00:25:57.519 08:17:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 bdev_null0 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:06 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 [2024-12-07 08:17:06.973440] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:06 -- target/dif.sh@30 -- # for sub in "$@" 00:25:57.519 08:17:06 -- target/dif.sh@31 -- # create_subsystem 1 00:25:57.519 08:17:06 -- target/dif.sh@18 -- # local sub_id=1 00:25:57.519 08:17:06 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 bdev_null1 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:06 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:06 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:57.519 08:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:06 -- common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.519 08:17:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.519 08:17:07 -- 
common/autotest_common.sh@10 -- # set +x 00:25:57.519 08:17:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.519 08:17:07 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:57.519 08:17:07 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:57.519 08:17:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:57.519 08:17:07 -- nvmf/common.sh@520 -- # config=() 00:25:57.519 08:17:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.519 08:17:07 -- nvmf/common.sh@520 -- # local subsystem config 00:25:57.519 08:17:07 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.519 08:17:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:57.519 08:17:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:57.519 08:17:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:57.519 { 00:25:57.519 "params": { 00:25:57.519 "name": "Nvme$subsystem", 00:25:57.519 "trtype": "$TEST_TRANSPORT", 00:25:57.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.519 "adrfam": "ipv4", 00:25:57.519 "trsvcid": "$NVMF_PORT", 00:25:57.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.519 "hdgst": ${hdgst:-false}, 00:25:57.519 "ddgst": ${ddgst:-false} 00:25:57.519 }, 00:25:57.519 "method": "bdev_nvme_attach_controller" 00:25:57.519 } 00:25:57.519 EOF 00:25:57.519 )") 00:25:57.519 08:17:07 -- target/dif.sh@82 -- # gen_fio_conf 00:25:57.519 08:17:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:57.519 08:17:07 -- target/dif.sh@54 -- # local file 00:25:57.519 08:17:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:57.519 08:17:07 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.519 08:17:07 -- target/dif.sh@56 -- # cat 00:25:57.519 08:17:07 -- common/autotest_common.sh@1330 -- # shift 00:25:57.519 08:17:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:57.519 08:17:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.519 08:17:07 -- nvmf/common.sh@542 -- # cat 00:25:57.519 08:17:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.519 08:17:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:57.519 08:17:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:57.519 08:17:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:57.519 08:17:07 -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.519 08:17:07 -- target/dif.sh@73 -- # cat 00:25:57.519 08:17:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:57.519 08:17:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:57.519 { 00:25:57.519 "params": { 00:25:57.519 "name": "Nvme$subsystem", 00:25:57.519 "trtype": "$TEST_TRANSPORT", 00:25:57.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:57.519 "adrfam": "ipv4", 00:25:57.519 "trsvcid": "$NVMF_PORT", 00:25:57.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:57.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:57.520 "hdgst": ${hdgst:-false}, 00:25:57.520 "ddgst": ${ddgst:-false} 00:25:57.520 }, 00:25:57.520 "method": "bdev_nvme_attach_controller" 00:25:57.520 } 00:25:57.520 EOF 00:25:57.520 )") 00:25:57.520 08:17:07 -- target/dif.sh@72 -- # (( file++ )) 00:25:57.520 08:17:07 -- 
nvmf/common.sh@542 -- # cat 00:25:57.520 08:17:07 -- target/dif.sh@72 -- # (( file <= files )) 00:25:57.520 08:17:07 -- nvmf/common.sh@544 -- # jq . 00:25:57.520 08:17:07 -- nvmf/common.sh@545 -- # IFS=, 00:25:57.520 08:17:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:57.520 "params": { 00:25:57.520 "name": "Nvme0", 00:25:57.520 "trtype": "tcp", 00:25:57.520 "traddr": "10.0.0.2", 00:25:57.520 "adrfam": "ipv4", 00:25:57.520 "trsvcid": "4420", 00:25:57.520 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:57.520 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:57.520 "hdgst": false, 00:25:57.520 "ddgst": false 00:25:57.520 }, 00:25:57.520 "method": "bdev_nvme_attach_controller" 00:25:57.520 },{ 00:25:57.520 "params": { 00:25:57.520 "name": "Nvme1", 00:25:57.520 "trtype": "tcp", 00:25:57.520 "traddr": "10.0.0.2", 00:25:57.520 "adrfam": "ipv4", 00:25:57.520 "trsvcid": "4420", 00:25:57.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:57.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:57.520 "hdgst": false, 00:25:57.520 "ddgst": false 00:25:57.520 }, 00:25:57.520 "method": "bdev_nvme_attach_controller" 00:25:57.520 }' 00:25:57.520 08:17:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:57.520 08:17:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:57.520 08:17:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:57.520 08:17:07 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:57.520 08:17:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:57.520 08:17:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:57.520 08:17:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:57.520 08:17:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:57.520 08:17:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:57.520 08:17:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:57.520 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:57.520 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:57.520 fio-3.35 00:25:57.520 Starting 2 threads 00:25:57.520 [2024-12-07 08:17:07.753158] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
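Both fio runs drive the target through SPDK's fio bdev plugin rather than a kernel initiator: fio is started with build/fio/spdk_bdev in LD_PRELOAD, the plugin builds a bdev layer from the JSON printed above (each bdev_nvme_attach_controller entry turns a subsystem into an Nvme<N> controller), and the job file addresses the resulting bdevs by name. The harness feeds both files through /dev/fd; a hypothetical on-disk equivalent for the single-subsystem case is sketched below, where the outer "subsystems"/"config" wrapper, the Nvme0n1 bdev name and the job-file layout are assumptions based on SPDK conventions, while the attach parameters and the fio invocation itself come from the trace:

SPDK=/home/vagrant/spdk_repo/spdk

# Bdev config: one NVMe-oF/TCP controller attached as Nvme0.
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job file matching the first run: 4 KiB random reads at queue depth 4 for
# about 10 seconds; the SPDK plugin requires thread=1.
cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --spdk_json_conf /tmp/nvmf_bdev.json /tmp/dif.fio

For the two-subsystem run that has just started, the JSON above simply gains a second attach entry for cnode1 (attached as Nvme1) and the job file a second [filename1] section.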
00:25:57.520 [2024-12-07 08:17:07.753611] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:07.490 00:26:07.490 filename0: (groupid=0, jobs=1): err= 0: pid=102376: Sat Dec 7 08:17:17 2024 00:26:07.490 read: IOPS=205, BW=821KiB/s (841kB/s)(8240KiB/10037msec) 00:26:07.490 slat (nsec): min=5935, max=51831, avg=8917.02, stdev=4792.91 00:26:07.490 clat (usec): min=368, max=42461, avg=19459.97, stdev=20185.09 00:26:07.490 lat (usec): min=374, max=42472, avg=19468.89, stdev=20185.03 00:26:07.490 clat percentiles (usec): 00:26:07.490 | 1.00th=[ 375], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 424], 00:26:07.490 | 30.00th=[ 449], 40.00th=[ 469], 50.00th=[ 685], 60.00th=[40633], 00:26:07.490 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:07.490 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:26:07.490 | 99.99th=[42206] 00:26:07.490 bw ( KiB/s): min= 576, max= 1376, per=49.97%, avg=822.30, stdev=193.18, samples=20 00:26:07.490 iops : min= 144, max= 344, avg=205.55, stdev=48.30, samples=20 00:26:07.490 lat (usec) : 500=45.68%, 750=5.63%, 1000=1.50% 00:26:07.490 lat (msec) : 2=0.19%, 50=46.99% 00:26:07.490 cpu : usr=97.04%, sys=2.51%, ctx=13, majf=0, minf=9 00:26:07.490 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.490 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.490 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:07.490 filename1: (groupid=0, jobs=1): err= 0: pid=102377: Sat Dec 7 08:17:17 2024 00:26:07.490 read: IOPS=206, BW=824KiB/s (844kB/s)(8272KiB/10036msec) 00:26:07.490 slat (nsec): min=5804, max=49992, avg=9153.46, stdev=4829.15 00:26:07.490 clat (usec): min=352, max=41522, avg=19381.96, stdev=20190.57 00:26:07.490 lat (usec): min=358, max=41531, avg=19391.11, stdev=20190.73 00:26:07.490 clat percentiles (usec): 00:26:07.490 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 416], 00:26:07.491 | 30.00th=[ 433], 40.00th=[ 457], 50.00th=[ 529], 60.00th=[40633], 00:26:07.491 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:07.491 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:07.491 | 99.99th=[41681] 00:26:07.491 bw ( KiB/s): min= 640, max= 1120, per=50.15%, avg=825.50, stdev=107.22, samples=20 00:26:07.491 iops : min= 160, max= 280, avg=206.35, stdev=26.81, samples=20 00:26:07.491 lat (usec) : 500=47.97%, 750=3.53%, 1000=1.50% 00:26:07.491 lat (msec) : 2=0.19%, 50=46.81% 00:26:07.491 cpu : usr=97.02%, sys=2.56%, ctx=16, majf=0, minf=0 00:26:07.491 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:07.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.491 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.491 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:07.491 00:26:07.491 Run status group 0 (all jobs): 00:26:07.491 READ: bw=1645KiB/s (1685kB/s), 821KiB/s-824KiB/s (841kB/s-844kB/s), io=16.1MiB (16.9MB), run=10036-10037msec 00:26:07.491 08:17:18 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:07.491 08:17:18 -- target/dif.sh@43 -- # local sub 00:26:07.491 08:17:18 -- target/dif.sh@45 -- # for sub in "$@" 
00:26:07.491 08:17:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:07.491 08:17:18 -- target/dif.sh@36 -- # local sub_id=0 00:26:07.491 08:17:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 08:17:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 08:17:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 08:17:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 08:17:18 -- target/dif.sh@45 -- # for sub in "$@" 00:26:07.491 08:17:18 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:07.491 08:17:18 -- target/dif.sh@36 -- # local sub_id=1 00:26:07.491 08:17:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 08:17:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 08:17:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 08:17:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 00:26:07.491 real 0m11.203s 00:26:07.491 user 0m20.255s 00:26:07.491 sys 0m0.804s 00:26:07.491 08:17:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:07.491 ************************************ 00:26:07.491 END TEST fio_dif_1_multi_subsystems 00:26:07.491 ************************************ 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 08:17:18 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:07.491 08:17:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:07.491 08:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 ************************************ 00:26:07.491 START TEST fio_dif_rand_params 00:26:07.491 ************************************ 00:26:07.491 08:17:18 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:07.491 08:17:18 -- target/dif.sh@100 -- # local NULL_DIF 00:26:07.491 08:17:18 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:07.491 08:17:18 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:07.491 08:17:18 -- target/dif.sh@103 -- # bs=128k 00:26:07.491 08:17:18 -- target/dif.sh@103 -- # numjobs=3 00:26:07.491 08:17:18 -- target/dif.sh@103 -- # iodepth=3 00:26:07.491 08:17:18 -- target/dif.sh@103 -- # runtime=5 00:26:07.491 08:17:18 -- target/dif.sh@105 -- # create_subsystems 0 00:26:07.491 08:17:18 -- target/dif.sh@28 -- # local sub 00:26:07.491 08:17:18 -- target/dif.sh@30 -- # for sub in "$@" 00:26:07.491 08:17:18 -- target/dif.sh@31 -- # create_subsystem 0 00:26:07.491 08:17:18 -- target/dif.sh@18 -- # local sub_id=0 00:26:07.491 08:17:18 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 bdev_null0 00:26:07.491 08:17:18 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 08:17:18 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 08:17:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 08:17:18 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 08:17:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 08:17:18 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:07.491 08:17:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.491 08:17:18 -- common/autotest_common.sh@10 -- # set +x 00:26:07.491 [2024-12-07 08:17:18.234726] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.491 08:17:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.491 08:17:18 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:07.491 08:17:18 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:07.491 08:17:18 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:07.491 08:17:18 -- nvmf/common.sh@520 -- # config=() 00:26:07.491 08:17:18 -- nvmf/common.sh@520 -- # local subsystem config 00:26:07.491 08:17:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:07.491 08:17:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:07.491 { 00:26:07.491 "params": { 00:26:07.491 "name": "Nvme$subsystem", 00:26:07.491 "trtype": "$TEST_TRANSPORT", 00:26:07.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:07.491 "adrfam": "ipv4", 00:26:07.491 "trsvcid": "$NVMF_PORT", 00:26:07.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:07.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:07.491 "hdgst": ${hdgst:-false}, 00:26:07.491 "ddgst": ${ddgst:-false} 00:26:07.491 }, 00:26:07.491 "method": "bdev_nvme_attach_controller" 00:26:07.491 } 00:26:07.491 EOF 00:26:07.491 )") 00:26:07.491 08:17:18 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.491 08:17:18 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.491 08:17:18 -- target/dif.sh@82 -- # gen_fio_conf 00:26:07.491 08:17:18 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:07.491 08:17:18 -- target/dif.sh@54 -- # local file 00:26:07.491 08:17:18 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:07.491 08:17:18 -- target/dif.sh@56 -- # cat 00:26:07.491 08:17:18 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:07.491 08:17:18 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.491 08:17:18 -- common/autotest_common.sh@1330 -- # shift 00:26:07.491 08:17:18 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:07.491 08:17:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.491 08:17:18 -- nvmf/common.sh@542 -- # cat 00:26:07.491 08:17:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.491 08:17:18 
-- common/autotest_common.sh@1334 -- # grep libasan 00:26:07.491 08:17:18 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:07.491 08:17:18 -- target/dif.sh@72 -- # (( file <= files )) 00:26:07.491 08:17:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:07.491 08:17:18 -- nvmf/common.sh@544 -- # jq . 00:26:07.491 08:17:18 -- nvmf/common.sh@545 -- # IFS=, 00:26:07.491 08:17:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:07.491 "params": { 00:26:07.491 "name": "Nvme0", 00:26:07.491 "trtype": "tcp", 00:26:07.491 "traddr": "10.0.0.2", 00:26:07.491 "adrfam": "ipv4", 00:26:07.491 "trsvcid": "4420", 00:26:07.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.491 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:07.491 "hdgst": false, 00:26:07.491 "ddgst": false 00:26:07.491 }, 00:26:07.491 "method": "bdev_nvme_attach_controller" 00:26:07.491 }' 00:26:07.491 08:17:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:07.491 08:17:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:07.491 08:17:18 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:07.491 08:17:18 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:07.491 08:17:18 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:07.491 08:17:18 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:07.491 08:17:18 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:07.491 08:17:18 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:07.491 08:17:18 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:07.491 08:17:18 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:07.491 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:07.491 ... 00:26:07.491 fio-3.35 00:26:07.491 Starting 3 threads 00:26:07.749 [2024-12-07 08:17:18.849143] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
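Relative to the two runs above, this first rand-params pass changes only the protection type and the job shape: the null bdev is created with DIF type 3, and fio issues 128 KiB random reads at queue depth 3 from three jobs for five seconds. A sketch of just the pieces that differ (values from the trace; the job-file layout and the Nvme0n1 name are assumed, as before):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Null bdev with 16-byte metadata and DIF type 3 instead of type 1.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# 128 KiB random reads, queue depth 3, three jobs, five seconds.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF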
00:26:07.749 [2024-12-07 08:17:18.849266] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:13.012 00:26:13.012 filename0: (groupid=0, jobs=1): err= 0: pid=102529: Sat Dec 7 08:17:23 2024 00:26:13.012 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5005msec) 00:26:13.012 slat (nsec): min=6022, max=55861, avg=13380.63, stdev=5855.30 00:26:13.012 clat (usec): min=4729, max=51440, avg=11441.36, stdev=10234.31 00:26:13.012 lat (usec): min=4738, max=51455, avg=11454.74, stdev=10234.24 00:26:13.012 clat percentiles (usec): 00:26:13.012 | 1.00th=[ 5538], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7635], 00:26:13.012 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:26:13.012 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[47449], 00:26:13.012 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51643], 00:26:13.012 | 99.99th=[51643] 00:26:13.012 bw ( KiB/s): min=23296, max=48737, per=31.72%, avg=33494.60, stdev=7408.63, samples=10 00:26:13.012 iops : min= 182, max= 380, avg=261.40, stdev=57.74, samples=10 00:26:13.012 lat (msec) : 10=81.15%, 20=11.98%, 50=5.50%, 100=1.37% 00:26:13.012 cpu : usr=93.75%, sys=4.78%, ctx=6, majf=0, minf=0 00:26:13.012 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.012 issued rwts: total=1310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.012 filename0: (groupid=0, jobs=1): err= 0: pid=102530: Sat Dec 7 08:17:23 2024 00:26:13.012 read: IOPS=248, BW=31.1MiB/s (32.6MB/s)(156MiB/5006msec) 00:26:13.012 slat (nsec): min=6213, max=59689, avg=14220.30, stdev=6065.93 00:26:13.012 clat (usec): min=3556, max=53968, avg=12041.00, stdev=9485.41 00:26:13.012 lat (usec): min=3566, max=53987, avg=12055.22, stdev=9485.75 00:26:13.012 clat percentiles (usec): 00:26:13.012 | 1.00th=[ 5604], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 7308], 00:26:13.012 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10552], 60.00th=[10814], 00:26:13.012 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12387], 95.00th=[46924], 00:26:13.012 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:26:13.012 | 99.99th=[53740] 00:26:13.012 bw ( KiB/s): min=20224, max=40960, per=30.11%, avg=31795.20, stdev=7205.29, samples=10 00:26:13.012 iops : min= 158, max= 320, avg=248.40, stdev=56.29, samples=10 00:26:13.012 lat (msec) : 4=0.56%, 10=37.59%, 20=56.06%, 50=3.45%, 100=2.33% 00:26:13.012 cpu : usr=92.33%, sys=5.79%, ctx=9, majf=0, minf=9 00:26:13.012 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.012 issued rwts: total=1245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.012 filename0: (groupid=0, jobs=1): err= 0: pid=102531: Sat Dec 7 08:17:23 2024 00:26:13.012 read: IOPS=314, BW=39.3MiB/s (41.2MB/s)(197MiB/5005msec) 00:26:13.012 slat (nsec): min=6182, max=63159, avg=13915.73, stdev=6510.76 00:26:13.012 clat (usec): min=3095, max=55387, avg=9512.08, stdev=4277.94 00:26:13.012 lat (usec): min=3105, max=55393, avg=9526.00, stdev=4278.43 00:26:13.012 clat percentiles 
(usec): 00:26:13.012 | 1.00th=[ 3458], 5.00th=[ 3589], 10.00th=[ 3720], 20.00th=[ 6456], 00:26:13.012 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 9372], 60.00th=[11600], 00:26:13.012 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13435], 95.00th=[13829], 00:26:13.012 | 99.00th=[14877], 99.50th=[15664], 99.90th=[54264], 99.95th=[55313], 00:26:13.012 | 99.99th=[55313] 00:26:13.012 bw ( KiB/s): min=29952, max=52992, per=38.14%, avg=40274.90, stdev=8243.56, samples=10 00:26:13.012 iops : min= 234, max= 414, avg=314.60, stdev=64.46, samples=10 00:26:13.012 lat (msec) : 4=15.11%, 10=37.21%, 20=47.30%, 50=0.19%, 100=0.19% 00:26:13.012 cpu : usr=93.98%, sys=4.46%, ctx=5, majf=0, minf=9 00:26:13.012 IO depths : 1=8.5%, 2=91.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:13.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.012 issued rwts: total=1575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.012 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:13.012 00:26:13.012 Run status group 0 (all jobs): 00:26:13.012 READ: bw=103MiB/s (108MB/s), 31.1MiB/s-39.3MiB/s (32.6MB/s-41.2MB/s), io=516MiB (541MB), run=5005-5006msec 00:26:13.012 08:17:24 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:13.012 08:17:24 -- target/dif.sh@43 -- # local sub 00:26:13.012 08:17:24 -- target/dif.sh@45 -- # for sub in "$@" 00:26:13.012 08:17:24 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:13.012 08:17:24 -- target/dif.sh@36 -- # local sub_id=0 00:26:13.012 08:17:24 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:13.012 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.012 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.012 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.012 08:17:24 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:13.012 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.012 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.012 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.012 08:17:24 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:13.012 08:17:24 -- target/dif.sh@109 -- # bs=4k 00:26:13.012 08:17:24 -- target/dif.sh@109 -- # numjobs=8 00:26:13.012 08:17:24 -- target/dif.sh@109 -- # iodepth=16 00:26:13.012 08:17:24 -- target/dif.sh@109 -- # runtime= 00:26:13.012 08:17:24 -- target/dif.sh@109 -- # files=2 00:26:13.012 08:17:24 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:13.012 08:17:24 -- target/dif.sh@28 -- # local sub 00:26:13.012 08:17:24 -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.012 08:17:24 -- target/dif.sh@31 -- # create_subsystem 0 00:26:13.012 08:17:24 -- target/dif.sh@18 -- # local sub_id=0 00:26:13.012 08:17:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:13.012 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.012 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.012 bdev_null0 00:26:13.012 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.012 08:17:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:13.012 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.012 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.012 08:17:24 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.012 08:17:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:13.012 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.012 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.013 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.013 08:17:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:13.013 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.013 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.013 [2024-12-07 08:17:24.243961] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.013 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.013 08:17:24 -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.013 08:17:24 -- target/dif.sh@31 -- # create_subsystem 1 00:26:13.013 08:17:24 -- target/dif.sh@18 -- # local sub_id=1 00:26:13.013 08:17:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:13.013 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.013 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.013 bdev_null1 00:26:13.013 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.013 08:17:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:13.013 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.013 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.013 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.013 08:17:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:13.013 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.013 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.013 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.013 08:17:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.013 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.013 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.013 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.013 08:17:24 -- target/dif.sh@30 -- # for sub in "$@" 00:26:13.013 08:17:24 -- target/dif.sh@31 -- # create_subsystem 2 00:26:13.013 08:17:24 -- target/dif.sh@18 -- # local sub_id=2 00:26:13.013 08:17:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:13.013 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.013 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.271 bdev_null2 00:26:13.271 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.271 08:17:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:13.271 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.271 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.271 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.271 08:17:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:13.271 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 
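The second rand-params pass, whose bring-up is being traced here, switches to DIF type 2 and fans the workload out: three null bdevs behind three subsystems (cnode0, cnode1, cnode2), all listening on 10.0.0.2:4420, so that fio can run 8 jobs against each of 3 files, the 24 threads started below, at 4 KiB and queue depth 16. The per-subsystem bring-up is the same four RPCs in a loop; as standalone calls with the arguments from the trace:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# One DIF type 2 null bdev and one subsystem per file, all on the same
# listener address.
for i in 0 1 2; do
    $RPC bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done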
00:26:13.271 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.271 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.271 08:17:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:13.271 08:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.271 08:17:24 -- common/autotest_common.sh@10 -- # set +x 00:26:13.272 08:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.272 08:17:24 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:13.272 08:17:24 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:13.272 08:17:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:13.272 08:17:24 -- nvmf/common.sh@520 -- # config=() 00:26:13.272 08:17:24 -- nvmf/common.sh@520 -- # local subsystem config 00:26:13.272 08:17:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.272 08:17:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:13.272 08:17:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:13.272 { 00:26:13.272 "params": { 00:26:13.272 "name": "Nvme$subsystem", 00:26:13.272 "trtype": "$TEST_TRANSPORT", 00:26:13.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.272 "adrfam": "ipv4", 00:26:13.272 "trsvcid": "$NVMF_PORT", 00:26:13.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.272 "hdgst": ${hdgst:-false}, 00:26:13.272 "ddgst": ${ddgst:-false} 00:26:13.272 }, 00:26:13.272 "method": "bdev_nvme_attach_controller" 00:26:13.272 } 00:26:13.272 EOF 00:26:13.272 )") 00:26:13.272 08:17:24 -- target/dif.sh@82 -- # gen_fio_conf 00:26:13.272 08:17:24 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.272 08:17:24 -- target/dif.sh@54 -- # local file 00:26:13.272 08:17:24 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:13.272 08:17:24 -- target/dif.sh@56 -- # cat 00:26:13.272 08:17:24 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:13.272 08:17:24 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:13.272 08:17:24 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.272 08:17:24 -- common/autotest_common.sh@1330 -- # shift 00:26:13.272 08:17:24 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:13.272 08:17:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.272 08:17:24 -- nvmf/common.sh@542 -- # cat 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.272 08:17:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:13.272 08:17:24 -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:13.272 08:17:24 -- target/dif.sh@73 -- # cat 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:13.272 08:17:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:13.272 08:17:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:13.272 { 00:26:13.272 "params": { 00:26:13.272 "name": "Nvme$subsystem", 00:26:13.272 "trtype": "$TEST_TRANSPORT", 00:26:13.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.272 "adrfam": "ipv4", 00:26:13.272 "trsvcid": "$NVMF_PORT", 00:26:13.272 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.272 "hdgst": ${hdgst:-false}, 00:26:13.272 "ddgst": ${ddgst:-false} 00:26:13.272 }, 00:26:13.272 "method": "bdev_nvme_attach_controller" 00:26:13.272 } 00:26:13.272 EOF 00:26:13.272 )") 00:26:13.272 08:17:24 -- nvmf/common.sh@542 -- # cat 00:26:13.272 08:17:24 -- target/dif.sh@72 -- # (( file++ )) 00:26:13.272 08:17:24 -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.272 08:17:24 -- target/dif.sh@73 -- # cat 00:26:13.272 08:17:24 -- target/dif.sh@72 -- # (( file++ )) 00:26:13.272 08:17:24 -- target/dif.sh@72 -- # (( file <= files )) 00:26:13.272 08:17:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:13.272 08:17:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:13.272 { 00:26:13.272 "params": { 00:26:13.272 "name": "Nvme$subsystem", 00:26:13.272 "trtype": "$TEST_TRANSPORT", 00:26:13.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.272 "adrfam": "ipv4", 00:26:13.272 "trsvcid": "$NVMF_PORT", 00:26:13.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.272 "hdgst": ${hdgst:-false}, 00:26:13.272 "ddgst": ${ddgst:-false} 00:26:13.272 }, 00:26:13.272 "method": "bdev_nvme_attach_controller" 00:26:13.272 } 00:26:13.272 EOF 00:26:13.272 )") 00:26:13.272 08:17:24 -- nvmf/common.sh@542 -- # cat 00:26:13.272 08:17:24 -- nvmf/common.sh@544 -- # jq . 00:26:13.272 08:17:24 -- nvmf/common.sh@545 -- # IFS=, 00:26:13.272 08:17:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:13.272 "params": { 00:26:13.272 "name": "Nvme0", 00:26:13.272 "trtype": "tcp", 00:26:13.272 "traddr": "10.0.0.2", 00:26:13.272 "adrfam": "ipv4", 00:26:13.272 "trsvcid": "4420", 00:26:13.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.272 "hdgst": false, 00:26:13.272 "ddgst": false 00:26:13.272 }, 00:26:13.272 "method": "bdev_nvme_attach_controller" 00:26:13.272 },{ 00:26:13.272 "params": { 00:26:13.272 "name": "Nvme1", 00:26:13.272 "trtype": "tcp", 00:26:13.272 "traddr": "10.0.0.2", 00:26:13.272 "adrfam": "ipv4", 00:26:13.272 "trsvcid": "4420", 00:26:13.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:13.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:13.272 "hdgst": false, 00:26:13.272 "ddgst": false 00:26:13.272 }, 00:26:13.272 "method": "bdev_nvme_attach_controller" 00:26:13.272 },{ 00:26:13.272 "params": { 00:26:13.272 "name": "Nvme2", 00:26:13.272 "trtype": "tcp", 00:26:13.272 "traddr": "10.0.0.2", 00:26:13.272 "adrfam": "ipv4", 00:26:13.272 "trsvcid": "4420", 00:26:13.272 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:13.272 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:13.272 "hdgst": false, 00:26:13.272 "ddgst": false 00:26:13.272 }, 00:26:13.272 "method": "bdev_nvme_attach_controller" 00:26:13.272 }' 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:13.272 08:17:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:13.272 08:17:24 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:13.272 08:17:24 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:13.272 08:17:24 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:13.272 
08:17:24 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:13.272 08:17:24 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:13.272 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:13.272 ... 00:26:13.272 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:13.272 ... 00:26:13.272 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:13.272 ... 00:26:13.272 fio-3.35 00:26:13.272 Starting 24 threads 00:26:14.205 [2024-12-07 08:17:25.145604] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:14.205 [2024-12-07 08:17:25.145678] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:24.178 00:26:24.178 filename0: (groupid=0, jobs=1): err= 0: pid=102631: Sat Dec 7 08:17:35 2024 00:26:24.178 read: IOPS=286, BW=1145KiB/s (1172kB/s)(11.3MiB/10076msec) 00:26:24.178 slat (usec): min=4, max=8026, avg=14.61, stdev=150.55 00:26:24.178 clat (msec): min=2, max=167, avg=55.71, stdev=21.44 00:26:24.178 lat (msec): min=2, max=167, avg=55.72, stdev=21.45 00:26:24.178 clat percentiles (msec): 00:26:24.178 | 1.00th=[ 5], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 40], 00:26:24.178 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 52], 60.00th=[ 58], 00:26:24.178 | 70.00th=[ 64], 80.00th=[ 73], 90.00th=[ 88], 95.00th=[ 96], 00:26:24.178 | 99.00th=[ 109], 99.50th=[ 126], 99.90th=[ 167], 99.95th=[ 167], 00:26:24.178 | 99.99th=[ 167] 00:26:24.178 bw ( KiB/s): min= 728, max= 1920, per=4.97%, avg=1147.20, stdev=292.45, samples=20 00:26:24.178 iops : min= 182, max= 480, avg=286.80, stdev=73.11, samples=20 00:26:24.178 lat (msec) : 4=0.90%, 10=1.87%, 20=0.55%, 50=44.45%, 100=49.24% 00:26:24.178 lat (msec) : 250=2.98% 00:26:24.178 cpu : usr=46.04%, sys=0.72%, ctx=1379, majf=0, minf=0 00:26:24.178 IO depths : 1=1.2%, 2=2.4%, 4=9.4%, 8=74.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:24.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 issued rwts: total=2884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.178 filename0: (groupid=0, jobs=1): err= 0: pid=102632: Sat Dec 7 08:17:35 2024 00:26:24.178 read: IOPS=225, BW=900KiB/s (922kB/s)(9032KiB/10032msec) 00:26:24.178 slat (usec): min=3, max=8018, avg=18.35, stdev=185.71 00:26:24.178 clat (msec): min=27, max=141, avg=70.94, stdev=17.47 00:26:24.178 lat (msec): min=27, max=141, avg=70.96, stdev=17.46 00:26:24.178 clat percentiles (msec): 00:26:24.178 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:26:24.178 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:26:24.178 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 102], 00:26:24.178 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 142], 99.95th=[ 142], 00:26:24.178 | 99.99th=[ 142] 00:26:24.178 bw ( KiB/s): min= 763, max= 1072, per=3.88%, avg=895.35, stdev=90.72, samples=20 00:26:24.178 iops : min= 190, max= 268, avg=223.80, stdev=22.74, samples=20 00:26:24.178 lat (msec) : 50=11.91%, 100=82.86%, 250=5.23% 00:26:24.178 cpu : usr=37.82%, sys=0.43%, ctx=1020, majf=0, minf=9 
00:26:24.178 IO depths : 1=2.7%, 2=5.8%, 4=15.4%, 8=65.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:26:24.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.178 filename0: (groupid=0, jobs=1): err= 0: pid=102633: Sat Dec 7 08:17:35 2024 00:26:24.178 read: IOPS=215, BW=863KiB/s (884kB/s)(8640KiB/10009msec) 00:26:24.178 slat (usec): min=4, max=8040, avg=23.69, stdev=272.87 00:26:24.178 clat (msec): min=32, max=175, avg=73.95, stdev=22.37 00:26:24.178 lat (msec): min=32, max=175, avg=73.97, stdev=22.37 00:26:24.178 clat percentiles (msec): 00:26:24.178 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 58], 00:26:24.178 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 75], 00:26:24.178 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 115], 00:26:24.178 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 176], 00:26:24.178 | 99.99th=[ 176] 00:26:24.178 bw ( KiB/s): min= 512, max= 1152, per=3.73%, avg=861.47, stdev=165.29, samples=19 00:26:24.178 iops : min= 128, max= 288, avg=215.37, stdev=41.32, samples=19 00:26:24.178 lat (msec) : 50=9.63%, 100=79.91%, 250=10.46% 00:26:24.178 cpu : usr=37.16%, sys=0.42%, ctx=1017, majf=0, minf=9 00:26:24.178 IO depths : 1=2.3%, 2=5.4%, 4=15.1%, 8=66.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:24.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.178 filename0: (groupid=0, jobs=1): err= 0: pid=102634: Sat Dec 7 08:17:35 2024 00:26:24.178 read: IOPS=239, BW=956KiB/s (979kB/s)(9612KiB/10050msec) 00:26:24.178 slat (usec): min=4, max=9021, avg=25.49, stdev=337.22 00:26:24.178 clat (msec): min=25, max=143, avg=66.64, stdev=19.86 00:26:24.178 lat (msec): min=25, max=143, avg=66.67, stdev=19.87 00:26:24.178 clat percentiles (msec): 00:26:24.178 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 48], 00:26:24.178 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 71], 00:26:24.178 | 70.00th=[ 73], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 96], 00:26:24.178 | 99.00th=[ 122], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:26:24.178 | 99.99th=[ 144] 00:26:24.178 bw ( KiB/s): min= 688, max= 1280, per=4.14%, avg=956.15, stdev=158.32, samples=20 00:26:24.178 iops : min= 172, max= 320, avg=239.00, stdev=39.61, samples=20 00:26:24.178 lat (msec) : 50=23.55%, 100=72.78%, 250=3.66% 00:26:24.178 cpu : usr=32.67%, sys=0.40%, ctx=884, majf=0, minf=9 00:26:24.178 IO depths : 1=0.9%, 2=1.9%, 4=8.3%, 8=76.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:24.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.178 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.178 filename0: (groupid=0, jobs=1): err= 0: pid=102635: Sat Dec 7 08:17:35 2024 00:26:24.178 read: IOPS=270, BW=1084KiB/s (1110kB/s)(10.6MiB/10062msec) 00:26:24.178 slat (usec): min=5, max=8070, avg=18.68, stdev=228.62 00:26:24.178 clat (msec): min=18, max=179, avg=58.86, stdev=19.53 
00:26:24.178 lat (msec): min=18, max=179, avg=58.88, stdev=19.53 00:26:24.178 clat percentiles (msec): 00:26:24.178 | 1.00th=[ 25], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 45], 00:26:24.178 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:26:24.178 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 96], 00:26:24.178 | 99.00th=[ 112], 99.50th=[ 150], 99.90th=[ 180], 99.95th=[ 180], 00:26:24.178 | 99.99th=[ 180] 00:26:24.179 bw ( KiB/s): min= 600, max= 1296, per=4.70%, avg=1084.00, stdev=189.22, samples=20 00:26:24.179 iops : min= 150, max= 324, avg=271.00, stdev=47.30, samples=20 00:26:24.179 lat (msec) : 20=0.59%, 50=39.40%, 100=56.82%, 250=3.19% 00:26:24.179 cpu : usr=38.41%, sys=0.61%, ctx=1101, majf=0, minf=9 00:26:24.179 IO depths : 1=0.8%, 2=1.9%, 4=8.1%, 8=76.0%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 complete : 0=0.0%, 4=89.6%, 8=6.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 issued rwts: total=2726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.179 filename0: (groupid=0, jobs=1): err= 0: pid=102636: Sat Dec 7 08:17:35 2024 00:26:24.179 read: IOPS=234, BW=940KiB/s (962kB/s)(9416KiB/10019msec) 00:26:24.179 slat (usec): min=4, max=8042, avg=20.97, stdev=248.20 00:26:24.179 clat (msec): min=29, max=134, avg=67.93, stdev=20.22 00:26:24.179 lat (msec): min=29, max=134, avg=67.95, stdev=20.23 00:26:24.179 clat percentiles (msec): 00:26:24.179 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 50], 00:26:24.179 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 69], 00:26:24.179 | 70.00th=[ 77], 80.00th=[ 88], 90.00th=[ 97], 95.00th=[ 107], 00:26:24.179 | 99.00th=[ 127], 99.50th=[ 129], 99.90th=[ 136], 99.95th=[ 136], 00:26:24.179 | 99.99th=[ 136] 00:26:24.179 bw ( KiB/s): min= 584, max= 1280, per=4.05%, avg=935.10, stdev=169.73, samples=20 00:26:24.179 iops : min= 146, max= 320, avg=233.75, stdev=42.42, samples=20 00:26:24.179 lat (msec) : 50=20.18%, 100=72.13%, 250=7.69% 00:26:24.179 cpu : usr=42.58%, sys=0.63%, ctx=1329, majf=0, minf=9 00:26:24.179 IO depths : 1=2.2%, 2=5.1%, 4=14.7%, 8=67.0%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 complete : 0=0.0%, 4=91.4%, 8=3.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 issued rwts: total=2354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.179 filename0: (groupid=0, jobs=1): err= 0: pid=102637: Sat Dec 7 08:17:35 2024 00:26:24.179 read: IOPS=220, BW=882KiB/s (903kB/s)(8836KiB/10022msec) 00:26:24.179 slat (usec): min=4, max=8018, avg=21.57, stdev=255.55 00:26:24.179 clat (msec): min=33, max=144, avg=72.48, stdev=20.01 00:26:24.179 lat (msec): min=33, max=144, avg=72.50, stdev=20.02 00:26:24.179 clat percentiles (msec): 00:26:24.179 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:26:24.179 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:26:24.179 | 70.00th=[ 80], 80.00th=[ 87], 90.00th=[ 104], 95.00th=[ 108], 00:26:24.179 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 146], 00:26:24.179 | 99.99th=[ 146] 00:26:24.179 bw ( KiB/s): min= 638, max= 1072, per=3.80%, avg=876.75, stdev=128.40, samples=20 00:26:24.179 iops : min= 159, max= 268, avg=219.15, stdev=32.15, samples=20 00:26:24.179 lat (msec) : 50=12.22%, 100=76.41%, 250=11.36% 00:26:24.179 cpu : 
usr=33.82%, sys=0.42%, ctx=923, majf=0, minf=9 00:26:24.179 IO depths : 1=1.5%, 2=3.5%, 4=11.9%, 8=70.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 complete : 0=0.0%, 4=90.3%, 8=5.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.179 filename0: (groupid=0, jobs=1): err= 0: pid=102638: Sat Dec 7 08:17:35 2024 00:26:24.179 read: IOPS=233, BW=935KiB/s (957kB/s)(9372KiB/10028msec) 00:26:24.179 slat (usec): min=4, max=8033, avg=28.19, stdev=341.17 00:26:24.179 clat (msec): min=23, max=144, avg=68.32, stdev=19.16 00:26:24.179 lat (msec): min=23, max=144, avg=68.35, stdev=19.16 00:26:24.179 clat percentiles (msec): 00:26:24.179 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 53], 00:26:24.179 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:26:24.179 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 102], 00:26:24.179 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 146], 00:26:24.179 | 99.99th=[ 146] 00:26:24.179 bw ( KiB/s): min= 682, max= 1200, per=4.03%, avg=929.85, stdev=144.93, samples=20 00:26:24.179 iops : min= 170, max= 300, avg=232.40, stdev=36.32, samples=20 00:26:24.179 lat (msec) : 50=18.05%, 100=76.48%, 250=5.46% 00:26:24.179 cpu : usr=36.48%, sys=0.42%, ctx=976, majf=0, minf=9 00:26:24.179 IO depths : 1=1.7%, 2=4.0%, 4=12.3%, 8=70.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 issued rwts: total=2343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.179 filename1: (groupid=0, jobs=1): err= 0: pid=102639: Sat Dec 7 08:17:35 2024 00:26:24.179 read: IOPS=256, BW=1027KiB/s (1052kB/s)(10.1MiB/10049msec) 00:26:24.179 slat (usec): min=6, max=8021, avg=20.05, stdev=236.62 00:26:24.179 clat (msec): min=26, max=139, avg=62.10, stdev=19.49 00:26:24.179 lat (msec): min=26, max=139, avg=62.12, stdev=19.49 00:26:24.179 clat percentiles (msec): 00:26:24.179 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 46], 00:26:24.179 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 65], 00:26:24.179 | 70.00th=[ 70], 80.00th=[ 79], 90.00th=[ 85], 95.00th=[ 103], 00:26:24.179 | 99.00th=[ 118], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 140], 00:26:24.179 | 99.99th=[ 140] 00:26:24.179 bw ( KiB/s): min= 720, max= 1296, per=4.44%, avg=1025.45, stdev=172.86, samples=20 00:26:24.179 iops : min= 180, max= 324, avg=256.35, stdev=43.23, samples=20 00:26:24.179 lat (msec) : 50=31.78%, 100=62.87%, 250=5.35% 00:26:24.179 cpu : usr=36.81%, sys=0.57%, ctx=1046, majf=0, minf=9 00:26:24.179 IO depths : 1=0.5%, 2=1.8%, 4=9.0%, 8=75.4%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.179 filename1: (groupid=0, jobs=1): err= 0: pid=102640: Sat Dec 7 08:17:35 2024 00:26:24.179 read: IOPS=262, BW=1050KiB/s (1075kB/s)(10.3MiB/10077msec) 00:26:24.179 slat (usec): min=6, max=8036, avg=15.46, stdev=156.30 00:26:24.179 
clat (msec): min=2, max=123, avg=60.76, stdev=19.04 00:26:24.179 lat (msec): min=2, max=123, avg=60.78, stdev=19.04 00:26:24.179 clat percentiles (msec): 00:26:24.179 | 1.00th=[ 4], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 47], 00:26:24.179 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 64], 00:26:24.179 | 70.00th=[ 69], 80.00th=[ 74], 90.00th=[ 84], 95.00th=[ 94], 00:26:24.179 | 99.00th=[ 106], 99.50th=[ 116], 99.90th=[ 125], 99.95th=[ 125], 00:26:24.179 | 99.99th=[ 125] 00:26:24.179 bw ( KiB/s): min= 848, max= 1664, per=4.56%, avg=1051.20, stdev=190.81, samples=20 00:26:24.179 iops : min= 212, max= 416, avg=262.80, stdev=47.70, samples=20 00:26:24.179 lat (msec) : 4=1.25%, 10=2.38%, 50=21.86%, 100=72.62%, 250=1.89% 00:26:24.179 cpu : usr=43.42%, sys=0.58%, ctx=1201, majf=0, minf=0 00:26:24.179 IO depths : 1=1.2%, 2=2.5%, 4=8.9%, 8=74.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 issued rwts: total=2644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.179 filename1: (groupid=0, jobs=1): err= 0: pid=102641: Sat Dec 7 08:17:35 2024 00:26:24.179 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10002msec) 00:26:24.179 slat (usec): min=3, max=4022, avg=13.48, stdev=79.43 00:26:24.179 clat (msec): min=8, max=163, avg=62.08, stdev=24.15 00:26:24.179 lat (msec): min=8, max=163, avg=62.09, stdev=24.15 00:26:24.179 clat percentiles (msec): 00:26:24.179 | 1.00th=[ 11], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 44], 00:26:24.179 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 63], 00:26:24.179 | 70.00th=[ 68], 80.00th=[ 79], 90.00th=[ 101], 95.00th=[ 111], 00:26:24.179 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:26:24.179 | 99.99th=[ 165] 00:26:24.179 bw ( KiB/s): min= 560, max= 1424, per=4.41%, avg=1018.21, stdev=250.53, samples=19 00:26:24.179 iops : min= 140, max= 356, avg=254.53, stdev=62.60, samples=19 00:26:24.179 lat (msec) : 10=0.62%, 20=1.24%, 50=33.10%, 100=55.44%, 250=9.60% 00:26:24.179 cpu : usr=42.90%, sys=0.58%, ctx=1572, majf=0, minf=9 00:26:24.179 IO depths : 1=1.4%, 2=2.9%, 4=9.5%, 8=73.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:26:24.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 complete : 0=0.0%, 4=90.1%, 8=5.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.179 issued rwts: total=2574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.179 filename1: (groupid=0, jobs=1): err= 0: pid=102643: Sat Dec 7 08:17:35 2024 00:26:24.179 read: IOPS=236, BW=946KiB/s (969kB/s)(9504KiB/10047msec) 00:26:24.179 slat (usec): min=4, max=8023, avg=25.71, stdev=328.28 00:26:24.179 clat (msec): min=23, max=134, avg=67.52, stdev=18.32 00:26:24.180 lat (msec): min=23, max=134, avg=67.54, stdev=18.32 00:26:24.180 clat percentiles (msec): 00:26:24.180 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:26:24.180 | 30.00th=[ 61], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:26:24.180 | 70.00th=[ 72], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 105], 00:26:24.180 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 134], 99.95th=[ 134], 00:26:24.180 | 99.99th=[ 134] 00:26:24.180 bw ( KiB/s): min= 744, max= 1096, per=4.08%, avg=941.15, stdev=108.61, samples=20 00:26:24.180 iops : min= 186, max= 274, avg=235.25, stdev=27.14, samples=20 
00:26:24.180 lat (msec) : 50=20.20%, 100=73.61%, 250=6.19% 00:26:24.180 cpu : usr=32.66%, sys=0.39%, ctx=866, majf=0, minf=9 00:26:24.180 IO depths : 1=1.0%, 2=2.4%, 4=9.9%, 8=74.5%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 issued rwts: total=2376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.180 filename1: (groupid=0, jobs=1): err= 0: pid=102644: Sat Dec 7 08:17:35 2024 00:26:24.180 read: IOPS=235, BW=943KiB/s (966kB/s)(9448KiB/10014msec) 00:26:24.180 slat (usec): min=3, max=12027, avg=27.21, stdev=330.95 00:26:24.180 clat (msec): min=32, max=179, avg=67.63, stdev=21.08 00:26:24.180 lat (msec): min=32, max=179, avg=67.65, stdev=21.08 00:26:24.180 clat percentiles (msec): 00:26:24.180 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 52], 00:26:24.180 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 68], 00:26:24.180 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 104], 00:26:24.180 | 99.00th=[ 136], 99.50th=[ 148], 99.90th=[ 180], 99.95th=[ 180], 00:26:24.180 | 99.99th=[ 180] 00:26:24.180 bw ( KiB/s): min= 512, max= 1200, per=4.01%, avg=924.74, stdev=171.08, samples=19 00:26:24.180 iops : min= 128, max= 300, avg=231.16, stdev=42.79, samples=19 00:26:24.180 lat (msec) : 50=18.12%, 100=73.71%, 250=8.17% 00:26:24.180 cpu : usr=44.57%, sys=0.60%, ctx=1219, majf=0, minf=9 00:26:24.180 IO depths : 1=2.2%, 2=4.8%, 4=13.8%, 8=68.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 issued rwts: total=2362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.180 filename1: (groupid=0, jobs=1): err= 0: pid=102645: Sat Dec 7 08:17:35 2024 00:26:24.180 read: IOPS=261, BW=1046KiB/s (1071kB/s)(10.2MiB/10019msec) 00:26:24.180 slat (usec): min=3, max=8040, avg=16.54, stdev=195.97 00:26:24.180 clat (msec): min=18, max=132, avg=61.07, stdev=18.50 00:26:24.180 lat (msec): min=19, max=132, avg=61.09, stdev=18.50 00:26:24.180 clat percentiles (msec): 00:26:24.180 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 45], 00:26:24.180 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:26:24.180 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 96], 00:26:24.180 | 99.00th=[ 110], 99.50th=[ 122], 99.90th=[ 133], 99.95th=[ 133], 00:26:24.180 | 99.99th=[ 133] 00:26:24.180 bw ( KiB/s): min= 816, max= 1392, per=4.51%, avg=1041.45, stdev=172.22, samples=20 00:26:24.180 iops : min= 204, max= 348, avg=260.35, stdev=43.06, samples=20 00:26:24.180 lat (msec) : 20=0.23%, 50=31.56%, 100=65.11%, 250=3.09% 00:26:24.180 cpu : usr=38.71%, sys=0.52%, ctx=1039, majf=0, minf=9 00:26:24.180 IO depths : 1=1.1%, 2=2.5%, 4=9.2%, 8=74.6%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.180 filename1: (groupid=0, jobs=1): err= 0: pid=102647: Sat Dec 7 08:17:35 2024 00:26:24.180 read: IOPS=223, BW=894KiB/s 
(916kB/s)(8968KiB/10030msec) 00:26:24.180 slat (usec): min=4, max=8031, avg=26.04, stdev=307.65 00:26:24.180 clat (msec): min=32, max=139, avg=71.39, stdev=18.64 00:26:24.180 lat (msec): min=32, max=139, avg=71.42, stdev=18.63 00:26:24.180 clat percentiles (msec): 00:26:24.180 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:26:24.180 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 74], 00:26:24.180 | 70.00th=[ 82], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 104], 00:26:24.180 | 99.00th=[ 126], 99.50th=[ 133], 99.90th=[ 133], 99.95th=[ 133], 00:26:24.180 | 99.99th=[ 140] 00:26:24.180 bw ( KiB/s): min= 640, max= 1144, per=3.86%, avg=890.70, stdev=149.70, samples=20 00:26:24.180 iops : min= 160, max= 286, avg=222.60, stdev=37.41, samples=20 00:26:24.180 lat (msec) : 50=11.02%, 100=82.83%, 250=6.16% 00:26:24.180 cpu : usr=34.33%, sys=0.54%, ctx=1173, majf=0, minf=9 00:26:24.180 IO depths : 1=2.1%, 2=5.1%, 4=14.8%, 8=66.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.180 filename1: (groupid=0, jobs=1): err= 0: pid=102648: Sat Dec 7 08:17:35 2024 00:26:24.180 read: IOPS=230, BW=921KiB/s (943kB/s)(9236KiB/10029msec) 00:26:24.180 slat (usec): min=4, max=4028, avg=14.79, stdev=83.96 00:26:24.180 clat (msec): min=32, max=130, avg=69.36, stdev=18.28 00:26:24.180 lat (msec): min=32, max=130, avg=69.37, stdev=18.28 00:26:24.180 clat percentiles (msec): 00:26:24.180 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 56], 00:26:24.180 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 72], 00:26:24.180 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 105], 00:26:24.180 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 131], 00:26:24.180 | 99.99th=[ 131] 00:26:24.180 bw ( KiB/s): min= 688, max= 1152, per=3.97%, avg=915.95, stdev=113.07, samples=20 00:26:24.180 iops : min= 172, max= 288, avg=228.95, stdev=28.32, samples=20 00:26:24.180 lat (msec) : 50=14.81%, 100=79.13%, 250=6.06% 00:26:24.180 cpu : usr=41.27%, sys=0.44%, ctx=1111, majf=0, minf=9 00:26:24.180 IO depths : 1=2.3%, 2=5.3%, 4=14.2%, 8=67.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 issued rwts: total=2309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.180 filename2: (groupid=0, jobs=1): err= 0: pid=102649: Sat Dec 7 08:17:35 2024 00:26:24.180 read: IOPS=241, BW=964KiB/s (988kB/s)(9692KiB/10049msec) 00:26:24.180 slat (usec): min=3, max=8031, avg=18.95, stdev=197.15 00:26:24.180 clat (msec): min=32, max=174, avg=66.11, stdev=19.38 00:26:24.180 lat (msec): min=32, max=174, avg=66.12, stdev=19.38 00:26:24.180 clat percentiles (msec): 00:26:24.180 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 51], 00:26:24.180 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 69], 00:26:24.180 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 93], 95.00th=[ 103], 00:26:24.180 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 176], 99.95th=[ 176], 00:26:24.180 | 99.99th=[ 176] 00:26:24.180 bw ( KiB/s): min= 640, max= 1296, per=4.17%, avg=962.75, stdev=181.41, samples=20 
00:26:24.180 iops : min= 160, max= 324, avg=240.65, stdev=45.39, samples=20 00:26:24.180 lat (msec) : 50=19.23%, 100=75.44%, 250=5.32% 00:26:24.180 cpu : usr=38.92%, sys=0.54%, ctx=1144, majf=0, minf=9 00:26:24.180 IO depths : 1=1.7%, 2=3.6%, 4=11.7%, 8=71.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:24.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.180 issued rwts: total=2423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.180 filename2: (groupid=0, jobs=1): err= 0: pid=102650: Sat Dec 7 08:17:35 2024 00:26:24.180 read: IOPS=247, BW=991KiB/s (1015kB/s)(9932KiB/10020msec) 00:26:24.180 slat (usec): min=4, max=8029, avg=17.68, stdev=227.50 00:26:24.180 clat (msec): min=24, max=169, avg=64.47, stdev=23.73 00:26:24.180 lat (msec): min=24, max=169, avg=64.49, stdev=23.73 00:26:24.180 clat percentiles (msec): 00:26:24.180 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:26:24.180 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 63], 00:26:24.180 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 96], 95.00th=[ 120], 00:26:24.180 | 99.00th=[ 146], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:26:24.180 | 99.99th=[ 169] 00:26:24.180 bw ( KiB/s): min= 512, max= 1256, per=4.30%, avg=991.21, stdev=225.49, samples=19 00:26:24.180 iops : min= 128, max= 314, avg=247.79, stdev=56.39, samples=19 00:26:24.180 lat (msec) : 50=33.67%, 100=57.91%, 250=8.42% 00:26:24.180 cpu : usr=35.92%, sys=0.48%, ctx=960, majf=0, minf=9 00:26:24.180 IO depths : 1=0.8%, 2=1.7%, 4=7.5%, 8=76.6%, 16=13.4%, 32=0.0%, >=64=0.0% 00:26:24.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 complete : 0=0.0%, 4=89.5%, 8=6.5%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 issued rwts: total=2483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.181 filename2: (groupid=0, jobs=1): err= 0: pid=102651: Sat Dec 7 08:17:35 2024 00:26:24.181 read: IOPS=224, BW=899KiB/s (921kB/s)(9032KiB/10042msec) 00:26:24.181 slat (usec): min=4, max=12020, avg=31.24, stdev=386.33 00:26:24.181 clat (msec): min=27, max=140, avg=70.85, stdev=20.09 00:26:24.181 lat (msec): min=27, max=140, avg=70.88, stdev=20.08 00:26:24.181 clat percentiles (msec): 00:26:24.181 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:26:24.181 | 30.00th=[ 60], 40.00th=[ 64], 50.00th=[ 67], 60.00th=[ 71], 00:26:24.181 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 108], 00:26:24.181 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:26:24.181 | 99.99th=[ 142] 00:26:24.181 bw ( KiB/s): min= 640, max= 1072, per=3.88%, avg=896.45, stdev=160.60, samples=20 00:26:24.181 iops : min= 160, max= 268, avg=224.10, stdev=40.14, samples=20 00:26:24.181 lat (msec) : 50=12.00%, 100=78.12%, 250=9.88% 00:26:24.181 cpu : usr=42.47%, sys=0.55%, ctx=1211, majf=0, minf=9 00:26:24.181 IO depths : 1=2.2%, 2=5.0%, 4=14.2%, 8=67.7%, 16=10.9%, 32=0.0%, >=64=0.0% 00:26:24.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.181 filename2: (groupid=0, jobs=1): err= 0: pid=102652: Sat Dec 7 08:17:35 2024 
00:26:24.181 read: IOPS=228, BW=914KiB/s (936kB/s)(9160KiB/10026msec) 00:26:24.181 slat (usec): min=4, max=7030, avg=19.39, stdev=188.66 00:26:24.181 clat (msec): min=31, max=141, avg=69.94, stdev=18.61 00:26:24.181 lat (msec): min=31, max=141, avg=69.96, stdev=18.62 00:26:24.181 clat percentiles (msec): 00:26:24.181 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 55], 00:26:24.181 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:26:24.181 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 101], 00:26:24.181 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 142], 99.95th=[ 142], 00:26:24.181 | 99.99th=[ 142] 00:26:24.181 bw ( KiB/s): min= 768, max= 1200, per=3.94%, avg=908.90, stdev=116.17, samples=20 00:26:24.181 iops : min= 192, max= 300, avg=227.15, stdev=29.03, samples=20 00:26:24.181 lat (msec) : 50=15.24%, 100=79.30%, 250=5.46% 00:26:24.181 cpu : usr=35.02%, sys=0.30%, ctx=1015, majf=0, minf=9 00:26:24.181 IO depths : 1=1.5%, 2=3.8%, 4=12.4%, 8=70.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:24.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 complete : 0=0.0%, 4=90.9%, 8=4.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 issued rwts: total=2290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.181 filename2: (groupid=0, jobs=1): err= 0: pid=102653: Sat Dec 7 08:17:35 2024 00:26:24.181 read: IOPS=254, BW=1017KiB/s (1041kB/s)(9.99MiB/10061msec) 00:26:24.181 slat (usec): min=4, max=8032, avg=21.53, stdev=263.21 00:26:24.181 clat (msec): min=25, max=138, avg=62.80, stdev=18.93 00:26:24.181 lat (msec): min=25, max=138, avg=62.82, stdev=18.94 00:26:24.181 clat percentiles (msec): 00:26:24.181 | 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 47], 00:26:24.181 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 67], 00:26:24.181 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 90], 95.00th=[ 99], 00:26:24.181 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 140], 99.95th=[ 140], 00:26:24.181 | 99.99th=[ 140] 00:26:24.181 bw ( KiB/s): min= 736, max= 1304, per=4.38%, avg=1011.15, stdev=147.37, samples=20 00:26:24.181 iops : min= 184, max= 326, avg=252.75, stdev=36.78, samples=20 00:26:24.181 lat (msec) : 50=31.95%, 100=63.39%, 250=4.65% 00:26:24.181 cpu : usr=34.37%, sys=0.57%, ctx=1173, majf=0, minf=9 00:26:24.181 IO depths : 1=0.8%, 2=2.2%, 4=10.1%, 8=74.4%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:24.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 issued rwts: total=2557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.181 filename2: (groupid=0, jobs=1): err= 0: pid=102654: Sat Dec 7 08:17:35 2024 00:26:24.181 read: IOPS=220, BW=881KiB/s (902kB/s)(8824KiB/10013msec) 00:26:24.181 slat (usec): min=4, max=8032, avg=22.92, stdev=295.59 00:26:24.181 clat (msec): min=31, max=160, avg=72.50, stdev=21.34 00:26:24.181 lat (msec): min=31, max=160, avg=72.53, stdev=21.34 00:26:24.181 clat percentiles (msec): 00:26:24.181 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 60], 00:26:24.181 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:26:24.181 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 109], 00:26:24.181 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 161], 99.95th=[ 161], 00:26:24.181 | 99.99th=[ 161] 00:26:24.181 bw ( KiB/s): min= 584, max= 1176, 
per=3.77%, avg=869.47, stdev=160.75, samples=19 00:26:24.181 iops : min= 146, max= 294, avg=217.37, stdev=40.19, samples=19 00:26:24.181 lat (msec) : 50=12.24%, 100=79.19%, 250=8.57% 00:26:24.181 cpu : usr=32.67%, sys=0.38%, ctx=873, majf=0, minf=9 00:26:24.181 IO depths : 1=1.5%, 2=3.6%, 4=11.7%, 8=70.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:24.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 complete : 0=0.0%, 4=90.9%, 8=4.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.181 filename2: (groupid=0, jobs=1): err= 0: pid=102655: Sat Dec 7 08:17:35 2024 00:26:24.181 read: IOPS=267, BW=1068KiB/s (1094kB/s)(10.4MiB/10002msec) 00:26:24.181 slat (usec): min=5, max=8025, avg=17.71, stdev=190.06 00:26:24.181 clat (msec): min=6, max=167, avg=59.81, stdev=21.15 00:26:24.181 lat (msec): min=6, max=167, avg=59.83, stdev=21.15 00:26:24.181 clat percentiles (msec): 00:26:24.181 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 43], 00:26:24.181 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 62], 00:26:24.181 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 86], 95.00th=[ 96], 00:26:24.181 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 167], 99.95th=[ 167], 00:26:24.181 | 99.99th=[ 167] 00:26:24.181 bw ( KiB/s): min= 640, max= 1504, per=4.61%, avg=1064.00, stdev=240.87, samples=19 00:26:24.181 iops : min= 160, max= 376, avg=266.00, stdev=60.22, samples=19 00:26:24.181 lat (msec) : 10=1.12%, 20=0.60%, 50=36.84%, 100=56.87%, 250=4.57% 00:26:24.181 cpu : usr=38.69%, sys=0.63%, ctx=1146, majf=0, minf=9 00:26:24.181 IO depths : 1=0.6%, 2=1.7%, 4=8.3%, 8=76.2%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:24.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 issued rwts: total=2671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.181 filename2: (groupid=0, jobs=1): err= 0: pid=102656: Sat Dec 7 08:17:35 2024 00:26:24.181 read: IOPS=218, BW=874KiB/s (895kB/s)(8752KiB/10015msec) 00:26:24.181 slat (usec): min=4, max=4028, avg=18.57, stdev=148.46 00:26:24.181 clat (msec): min=34, max=141, avg=73.13, stdev=20.21 00:26:24.181 lat (msec): min=34, max=141, avg=73.15, stdev=20.21 00:26:24.181 clat percentiles (msec): 00:26:24.181 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 55], 20.00th=[ 59], 00:26:24.181 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 68], 60.00th=[ 71], 00:26:24.181 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 111], 00:26:24.181 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:26:24.181 | 99.99th=[ 142] 00:26:24.181 bw ( KiB/s): min= 638, max= 1200, per=3.76%, avg=868.35, stdev=144.02, samples=20 00:26:24.181 iops : min= 159, max= 300, avg=217.05, stdev=36.05, samples=20 00:26:24.181 lat (msec) : 50=8.18%, 100=81.58%, 250=10.24% 00:26:24.181 cpu : usr=45.48%, sys=0.61%, ctx=1230, majf=0, minf=9 00:26:24.181 IO depths : 1=2.6%, 2=6.2%, 4=16.9%, 8=63.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:24.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 complete : 0=0.0%, 4=91.8%, 8=3.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.181 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:24.181 
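[Editor's note] The per-thread read statistics above come from the random-parameters phase of the dif.sh test, which drives fio through the SPDK bdev plugin against NVMe-oF null bdevs. A minimal standalone invocation in the same spirit is sketched below; the plugin path matches the LD_PRELOAD seen later in this log, but the JSON config file name, the bdev name (Nvme0n1) and the job values are illustrative assumptions, not values taken from this run.

# Hedged sketch (not from this log): replay one random-read job through the
# SPDK fio bdev plugin. bdev.json is assumed to describe a
# bdev_nvme_attach_controller entry; Nvme0n1 is the namespace bdev it exposes.
SPDK_FIO_PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

LD_PRELOAD="$SPDK_FIO_PLUGIN" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=./bdev.json \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=4k --iodepth=16 \
    --runtime=10 --time_based=1 --thread=1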
00:26:24.181 Run status group 0 (all jobs): 00:26:24.181 READ: bw=22.5MiB/s (23.6MB/s), 863KiB/s-1145KiB/s (884kB/s-1172kB/s), io=227MiB (238MB), run=10002-10077msec 00:26:24.440 08:17:35 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:24.440 08:17:35 -- target/dif.sh@43 -- # local sub 00:26:24.440 08:17:35 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.440 08:17:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:24.440 08:17:35 -- target/dif.sh@36 -- # local sub_id=0 00:26:24.440 08:17:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:24.440 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.440 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.440 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.440 08:17:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:24.440 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.440 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.440 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.440 08:17:35 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.440 08:17:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:24.440 08:17:35 -- target/dif.sh@36 -- # local sub_id=1 00:26:24.440 08:17:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.440 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.440 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.440 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.440 08:17:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:24.440 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.440 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.440 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.440 08:17:35 -- target/dif.sh@45 -- # for sub in "$@" 00:26:24.440 08:17:35 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:24.440 08:17:35 -- target/dif.sh@36 -- # local sub_id=2 00:26:24.440 08:17:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:24.440 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.440 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.440 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.440 08:17:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:24.440 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.440 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.440 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.440 08:17:35 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:24.440 08:17:35 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:24.440 08:17:35 -- target/dif.sh@115 -- # numjobs=2 00:26:24.440 08:17:35 -- target/dif.sh@115 -- # iodepth=8 00:26:24.440 08:17:35 -- target/dif.sh@115 -- # runtime=5 00:26:24.441 08:17:35 -- target/dif.sh@115 -- # files=1 00:26:24.441 08:17:35 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:24.441 08:17:35 -- target/dif.sh@28 -- # local sub 00:26:24.441 08:17:35 -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.441 08:17:35 -- target/dif.sh@31 -- # create_subsystem 0 00:26:24.441 08:17:35 -- target/dif.sh@18 -- # local sub_id=0 00:26:24.441 08:17:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:24.441 08:17:35 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.441 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.441 bdev_null0 00:26:24.441 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.441 08:17:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:24.441 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.441 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.441 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.441 08:17:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:24.441 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.441 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.441 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.441 08:17:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:24.441 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.441 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.441 [2024-12-07 08:17:35.690215] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.441 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.441 08:17:35 -- target/dif.sh@30 -- # for sub in "$@" 00:26:24.441 08:17:35 -- target/dif.sh@31 -- # create_subsystem 1 00:26:24.441 08:17:35 -- target/dif.sh@18 -- # local sub_id=1 00:26:24.441 08:17:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:24.441 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.441 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.441 bdev_null1 00:26:24.441 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.441 08:17:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:24.441 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.441 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.441 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.441 08:17:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:24.441 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.441 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.700 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.700 08:17:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.700 08:17:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.700 08:17:35 -- common/autotest_common.sh@10 -- # set +x 00:26:24.700 08:17:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.700 08:17:35 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:24.700 08:17:35 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:24.700 08:17:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:24.700 08:17:35 -- nvmf/common.sh@520 -- # config=() 00:26:24.700 08:17:35 -- nvmf/common.sh@520 -- # local subsystem config 00:26:24.700 08:17:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.700 08:17:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.700 08:17:35 -- nvmf/common.sh@542 
-- # config+=("$(cat <<-EOF 00:26:24.700 { 00:26:24.700 "params": { 00:26:24.700 "name": "Nvme$subsystem", 00:26:24.700 "trtype": "$TEST_TRANSPORT", 00:26:24.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.700 "adrfam": "ipv4", 00:26:24.700 "trsvcid": "$NVMF_PORT", 00:26:24.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.700 "hdgst": ${hdgst:-false}, 00:26:24.700 "ddgst": ${ddgst:-false} 00:26:24.700 }, 00:26:24.700 "method": "bdev_nvme_attach_controller" 00:26:24.700 } 00:26:24.700 EOF 00:26:24.700 )") 00:26:24.700 08:17:35 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.700 08:17:35 -- target/dif.sh@82 -- # gen_fio_conf 00:26:24.700 08:17:35 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:24.700 08:17:35 -- target/dif.sh@54 -- # local file 00:26:24.700 08:17:35 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:24.700 08:17:35 -- target/dif.sh@56 -- # cat 00:26:24.700 08:17:35 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:24.700 08:17:35 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.700 08:17:35 -- common/autotest_common.sh@1330 -- # shift 00:26:24.700 08:17:35 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:24.700 08:17:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.700 08:17:35 -- nvmf/common.sh@542 -- # cat 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.700 08:17:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:24.700 08:17:35 -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.700 08:17:35 -- target/dif.sh@73 -- # cat 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:24.700 08:17:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:24.700 08:17:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:24.700 { 00:26:24.700 "params": { 00:26:24.700 "name": "Nvme$subsystem", 00:26:24.700 "trtype": "$TEST_TRANSPORT", 00:26:24.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:24.700 "adrfam": "ipv4", 00:26:24.700 "trsvcid": "$NVMF_PORT", 00:26:24.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:24.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:24.700 "hdgst": ${hdgst:-false}, 00:26:24.700 "ddgst": ${ddgst:-false} 00:26:24.700 }, 00:26:24.700 "method": "bdev_nvme_attach_controller" 00:26:24.700 } 00:26:24.700 EOF 00:26:24.700 )") 00:26:24.700 08:17:35 -- target/dif.sh@72 -- # (( file++ )) 00:26:24.700 08:17:35 -- target/dif.sh@72 -- # (( file <= files )) 00:26:24.700 08:17:35 -- nvmf/common.sh@542 -- # cat 00:26:24.700 08:17:35 -- nvmf/common.sh@544 -- # jq . 
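[Editor's note] The rpc_cmd calls traced above (bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener, and the matching delete calls in destroy_subsystem) correspond to ordinary scripts/rpc.py invocations. A hedged sketch of the equivalent standalone setup and teardown follows; the arguments mirror the trace, while the rpc.py path and default RPC socket are assumptions.

# Hedged sketch of the same subsystem setup/teardown done with rpc.py directly
# (values copied from the rpc_cmd trace above).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# create: null bdev with 16-byte metadata and DIF type 1, exported over TCP
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
     --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420

# teardown (mirrors destroy_subsystem in the trace)
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0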
00:26:24.700 08:17:35 -- nvmf/common.sh@545 -- # IFS=, 00:26:24.700 08:17:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:24.700 "params": { 00:26:24.700 "name": "Nvme0", 00:26:24.700 "trtype": "tcp", 00:26:24.700 "traddr": "10.0.0.2", 00:26:24.700 "adrfam": "ipv4", 00:26:24.700 "trsvcid": "4420", 00:26:24.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:24.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:24.700 "hdgst": false, 00:26:24.700 "ddgst": false 00:26:24.700 }, 00:26:24.700 "method": "bdev_nvme_attach_controller" 00:26:24.700 },{ 00:26:24.700 "params": { 00:26:24.700 "name": "Nvme1", 00:26:24.700 "trtype": "tcp", 00:26:24.700 "traddr": "10.0.0.2", 00:26:24.700 "adrfam": "ipv4", 00:26:24.700 "trsvcid": "4420", 00:26:24.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:24.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:24.700 "hdgst": false, 00:26:24.700 "ddgst": false 00:26:24.700 }, 00:26:24.700 "method": "bdev_nvme_attach_controller" 00:26:24.700 }' 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:24.700 08:17:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:24.700 08:17:35 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:24.700 08:17:35 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:24.700 08:17:35 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:24.700 08:17:35 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:24.700 08:17:35 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:24.700 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:24.700 ... 00:26:24.700 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:24.700 ... 00:26:24.700 fio-3.35 00:26:24.700 Starting 4 threads 00:26:25.265 [2024-12-07 08:17:36.478091] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:25.265 [2024-12-07 08:17:36.478166] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:30.534 00:26:30.535 filename0: (groupid=0, jobs=1): err= 0: pid=102788: Sat Dec 7 08:17:41 2024 00:26:30.535 read: IOPS=2248, BW=17.6MiB/s (18.4MB/s)(87.8MiB/5001msec) 00:26:30.535 slat (usec): min=6, max=101, avg=21.06, stdev=10.31 00:26:30.535 clat (usec): min=919, max=6049, avg=3464.98, stdev=264.09 00:26:30.535 lat (usec): min=926, max=6057, avg=3486.03, stdev=264.58 00:26:30.535 clat percentiles (usec): 00:26:30.535 | 1.00th=[ 2900], 5.00th=[ 3195], 10.00th=[ 3261], 20.00th=[ 3294], 00:26:30.535 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:30.535 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3752], 95.00th=[ 3916], 00:26:30.535 | 99.00th=[ 4359], 99.50th=[ 4686], 99.90th=[ 5538], 99.95th=[ 5735], 00:26:30.535 | 99.99th=[ 5866] 00:26:30.535 bw ( KiB/s): min=17664, max=18688, per=24.90%, avg=17968.00, stdev=342.88, samples=9 00:26:30.535 iops : min= 2208, max= 2336, avg=2246.00, stdev=42.86, samples=9 00:26:30.535 lat (usec) : 1000=0.03% 00:26:30.535 lat (msec) : 2=0.06%, 4=96.29%, 10=3.62% 00:26:30.535 cpu : usr=94.84%, sys=3.72%, ctx=10, majf=0, minf=9 00:26:30.535 IO depths : 1=8.7%, 2=19.7%, 4=55.1%, 8=16.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 issued rwts: total=11243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.535 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.535 filename0: (groupid=0, jobs=1): err= 0: pid=102789: Sat Dec 7 08:17:41 2024 00:26:30.535 read: IOPS=2251, BW=17.6MiB/s (18.4MB/s)(88.0MiB/5002msec) 00:26:30.535 slat (usec): min=5, max=102, avg=20.70, stdev=12.21 00:26:30.535 clat (usec): min=552, max=6392, avg=3456.58, stdev=241.71 00:26:30.535 lat (usec): min=559, max=6418, avg=3477.28, stdev=241.61 00:26:30.535 clat percentiles (usec): 00:26:30.535 | 1.00th=[ 3097], 5.00th=[ 3195], 10.00th=[ 3261], 20.00th=[ 3294], 00:26:30.535 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:30.535 | 70.00th=[ 3523], 80.00th=[ 3589], 90.00th=[ 3720], 95.00th=[ 3851], 00:26:30.535 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 5342], 99.95th=[ 6194], 00:26:30.535 | 99.99th=[ 6259] 00:26:30.535 bw ( KiB/s): min=17747, max=18736, per=24.95%, avg=18005.67, stdev=305.05, samples=9 00:26:30.535 iops : min= 2218, max= 2342, avg=2250.67, stdev=38.17, samples=9 00:26:30.535 lat (usec) : 750=0.03%, 1000=0.05% 00:26:30.535 lat (msec) : 2=0.03%, 4=97.51%, 10=2.39% 00:26:30.535 cpu : usr=95.02%, sys=3.60%, ctx=8, majf=0, minf=0 00:26:30.535 IO depths : 1=10.0%, 2=23.5%, 4=51.4%, 8=15.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 issued rwts: total=11264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.535 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.535 filename1: (groupid=0, jobs=1): err= 0: pid=102790: Sat Dec 7 08:17:41 2024 00:26:30.535 read: IOPS=2269, BW=17.7MiB/s (18.6MB/s)(88.7MiB/5003msec) 00:26:30.535 slat (nsec): min=5351, max=74038, avg=10476.35, stdev=7017.88 00:26:30.535 clat (usec): min=897, max=4775, avg=3478.44, stdev=290.74 00:26:30.535 lat (usec): min=904, max=4796, avg=3488.92, stdev=290.85 00:26:30.535 
clat percentiles (usec): 00:26:30.535 | 1.00th=[ 2638], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3359], 00:26:30.535 | 30.00th=[ 3392], 40.00th=[ 3425], 50.00th=[ 3458], 60.00th=[ 3490], 00:26:30.535 | 70.00th=[ 3556], 80.00th=[ 3621], 90.00th=[ 3752], 95.00th=[ 3884], 00:26:30.535 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4424], 99.95th=[ 4490], 00:26:30.535 | 99.99th=[ 4752] 00:26:30.535 bw ( KiB/s): min=17792, max=18688, per=25.15%, avg=18149.33, stdev=286.89, samples=9 00:26:30.535 iops : min= 2224, max= 2336, avg=2268.67, stdev=35.86, samples=9 00:26:30.535 lat (usec) : 1000=0.28% 00:26:30.535 lat (msec) : 2=0.56%, 4=96.95%, 10=2.20% 00:26:30.535 cpu : usr=96.00%, sys=2.84%, ctx=23, majf=0, minf=0 00:26:30.535 IO depths : 1=7.0%, 2=18.0%, 4=56.6%, 8=18.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 issued rwts: total=11356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.535 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.535 filename1: (groupid=0, jobs=1): err= 0: pid=102791: Sat Dec 7 08:17:41 2024 00:26:30.535 read: IOPS=2252, BW=17.6MiB/s (18.4MB/s)(88.0MiB/5003msec) 00:26:30.535 slat (usec): min=5, max=100, avg=22.33, stdev=11.23 00:26:30.535 clat (usec): min=1157, max=6688, avg=3447.96, stdev=230.53 00:26:30.535 lat (usec): min=1164, max=6711, avg=3470.28, stdev=231.09 00:26:30.535 clat percentiles (usec): 00:26:30.535 | 1.00th=[ 3097], 5.00th=[ 3228], 10.00th=[ 3261], 20.00th=[ 3294], 00:26:30.535 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3425], 60.00th=[ 3458], 00:26:30.535 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3720], 95.00th=[ 3851], 00:26:30.535 | 99.00th=[ 4146], 99.50th=[ 4359], 99.90th=[ 5276], 99.95th=[ 5538], 00:26:30.535 | 99.99th=[ 6194] 00:26:30.535 bw ( KiB/s): min=17776, max=18560, per=24.94%, avg=17996.44, stdev=248.74, samples=9 00:26:30.535 iops : min= 2222, max= 2320, avg=2249.56, stdev=31.09, samples=9 00:26:30.535 lat (msec) : 2=0.12%, 4=97.67%, 10=2.21% 00:26:30.535 cpu : usr=95.46%, sys=3.14%, ctx=6, majf=0, minf=0 00:26:30.535 IO depths : 1=9.4%, 2=24.9%, 4=50.1%, 8=15.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:30.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.535 issued rwts: total=11267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.535 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:30.535 00:26:30.535 Run status group 0 (all jobs): 00:26:30.535 READ: bw=70.5MiB/s (73.9MB/s), 17.6MiB/s-17.7MiB/s (18.4MB/s-18.6MB/s), io=353MiB (370MB), run=5001-5003msec 00:26:30.794 08:17:41 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:30.794 08:17:41 -- target/dif.sh@43 -- # local sub 00:26:30.794 08:17:41 -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.794 08:17:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:30.794 08:17:41 -- target/dif.sh@36 -- # local sub_id=0 00:26:30.794 08:17:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:30.794 08:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.794 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.794 08:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.794 08:17:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:30.794 08:17:41 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:30.794 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.794 08:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.794 08:17:41 -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.795 08:17:41 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:30.795 08:17:41 -- target/dif.sh@36 -- # local sub_id=1 00:26:30.795 08:17:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.795 08:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.795 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 08:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.795 08:17:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:30.795 08:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.795 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 08:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.795 00:26:30.795 real 0m23.706s 00:26:30.795 user 2m8.259s 00:26:30.795 sys 0m3.519s 00:26:30.795 08:17:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:30.795 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 ************************************ 00:26:30.795 END TEST fio_dif_rand_params 00:26:30.795 ************************************ 00:26:30.795 08:17:41 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:30.795 08:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:30.795 08:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:30.795 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 ************************************ 00:26:30.795 START TEST fio_dif_digest 00:26:30.795 ************************************ 00:26:30.795 08:17:41 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:30.795 08:17:41 -- target/dif.sh@123 -- # local NULL_DIF 00:26:30.795 08:17:41 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:30.795 08:17:41 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:30.795 08:17:41 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:30.795 08:17:41 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:30.795 08:17:41 -- target/dif.sh@127 -- # numjobs=3 00:26:30.795 08:17:41 -- target/dif.sh@127 -- # iodepth=3 00:26:30.795 08:17:41 -- target/dif.sh@127 -- # runtime=10 00:26:30.795 08:17:41 -- target/dif.sh@128 -- # hdgst=true 00:26:30.795 08:17:41 -- target/dif.sh@128 -- # ddgst=true 00:26:30.795 08:17:41 -- target/dif.sh@130 -- # create_subsystems 0 00:26:30.795 08:17:41 -- target/dif.sh@28 -- # local sub 00:26:30.795 08:17:41 -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.795 08:17:41 -- target/dif.sh@31 -- # create_subsystem 0 00:26:30.795 08:17:41 -- target/dif.sh@18 -- # local sub_id=0 00:26:30.795 08:17:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:30.795 08:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.795 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 bdev_null0 00:26:30.795 08:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.795 08:17:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:30.795 08:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.795 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 08:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.795 08:17:41 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:30.795 08:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.795 08:17:41 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 08:17:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.795 08:17:42 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:30.795 08:17:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.795 08:17:42 -- common/autotest_common.sh@10 -- # set +x 00:26:30.795 [2024-12-07 08:17:42.006236] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.795 08:17:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.795 08:17:42 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:30.795 08:17:42 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:30.795 08:17:42 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:30.795 08:17:42 -- nvmf/common.sh@520 -- # config=() 00:26:30.795 08:17:42 -- nvmf/common.sh@520 -- # local subsystem config 00:26:30.795 08:17:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:30.795 08:17:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:30.795 { 00:26:30.795 "params": { 00:26:30.795 "name": "Nvme$subsystem", 00:26:30.795 "trtype": "$TEST_TRANSPORT", 00:26:30.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.795 "adrfam": "ipv4", 00:26:30.795 "trsvcid": "$NVMF_PORT", 00:26:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.795 "hdgst": ${hdgst:-false}, 00:26:30.795 "ddgst": ${ddgst:-false} 00:26:30.795 }, 00:26:30.795 "method": "bdev_nvme_attach_controller" 00:26:30.795 } 00:26:30.795 EOF 00:26:30.795 )") 00:26:30.795 08:17:42 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.795 08:17:42 -- target/dif.sh@82 -- # gen_fio_conf 00:26:30.795 08:17:42 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.795 08:17:42 -- target/dif.sh@54 -- # local file 00:26:30.795 08:17:42 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:30.795 08:17:42 -- target/dif.sh@56 -- # cat 00:26:30.795 08:17:42 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.795 08:17:42 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:30.795 08:17:42 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.795 08:17:42 -- common/autotest_common.sh@1330 -- # shift 00:26:30.795 08:17:42 -- nvmf/common.sh@542 -- # cat 00:26:30.795 08:17:42 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:30.795 08:17:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.795 08:17:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.795 08:17:42 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:30.795 08:17:42 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:30.795 08:17:42 -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.795 08:17:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:30.795 08:17:42 -- nvmf/common.sh@544 -- # jq . 
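[Editor's note] For this digest test, gen_nvmf_target_json assembles bdev_nvme_attach_controller parameters with header and data digests enabled ("hdgst": true, "ddgst": true), and fio reads the result from /dev/fd/62; the printf a few lines below emits exactly those parameters. Written out as a standalone file for the plugin, the config would look roughly like the sketch below; the outer "subsystems" wrapper and the file name follow the usual SPDK JSON config layout and are assumptions, not text copied from this log.

# Hedged sketch: materialize the generated target JSON as a file that could be
# passed to fio via --spdk_json_conf (addresses mirror the trace above).
cat > /tmp/nvme_tcp_digest.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF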
00:26:30.795 08:17:42 -- nvmf/common.sh@545 -- # IFS=, 00:26:30.795 08:17:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:30.795 "params": { 00:26:30.795 "name": "Nvme0", 00:26:30.795 "trtype": "tcp", 00:26:30.795 "traddr": "10.0.0.2", 00:26:30.795 "adrfam": "ipv4", 00:26:30.795 "trsvcid": "4420", 00:26:30.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:30.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:30.795 "hdgst": true, 00:26:30.795 "ddgst": true 00:26:30.795 }, 00:26:30.795 "method": "bdev_nvme_attach_controller" 00:26:30.795 }' 00:26:30.795 08:17:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:30.795 08:17:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:30.795 08:17:42 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.795 08:17:42 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:30.795 08:17:42 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:30.795 08:17:42 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:31.055 08:17:42 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:31.055 08:17:42 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:31.055 08:17:42 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:31.055 08:17:42 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.055 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:31.055 ... 00:26:31.055 fio-3.35 00:26:31.055 Starting 3 threads 00:26:31.623 [2024-12-07 08:17:42.631310] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:31.623 [2024-12-07 08:17:42.631380] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:41.611 00:26:41.611 filename0: (groupid=0, jobs=1): err= 0: pid=102897: Sat Dec 7 08:17:52 2024 00:26:41.611 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(299MiB/10004msec) 00:26:41.611 slat (nsec): min=5586, max=75088, avg=14919.04, stdev=6136.58 00:26:41.611 clat (usec): min=7838, max=53223, avg=12542.97, stdev=8997.27 00:26:41.611 lat (usec): min=7849, max=53243, avg=12557.89, stdev=8997.20 00:26:41.611 clat percentiles (usec): 00:26:41.611 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:26:41.611 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:26:41.611 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[49021], 00:26:41.611 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:26:41.611 | 99.99th=[53216] 00:26:41.611 bw ( KiB/s): min=24064, max=35328, per=32.72%, avg=30730.47, stdev=3548.79, samples=19 00:26:41.611 iops : min= 188, max= 276, avg=240.05, stdev=27.75, samples=19 00:26:41.611 lat (msec) : 10=30.14%, 20=64.46%, 50=1.13%, 100=4.27% 00:26:41.611 cpu : usr=93.98%, sys=4.38%, ctx=201, majf=0, minf=9 00:26:41.611 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:41.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.611 issued rwts: total=2389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.611 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:41.611 filename0: (groupid=0, jobs=1): err= 0: pid=102898: Sat Dec 7 08:17:52 2024 00:26:41.611 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(337MiB/10005msec) 00:26:41.611 slat (nsec): min=6342, max=78728, avg=16416.26, stdev=6654.95 00:26:41.611 clat (usec): min=5940, max=22202, avg=11108.41, stdev=2262.98 00:26:41.611 lat (usec): min=5966, max=22223, avg=11124.82, stdev=2262.99 00:26:41.611 clat percentiles (usec): 00:26:41.611 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8356], 00:26:41.611 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:26:41.611 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13304], 95.00th=[13829], 00:26:41.611 | 99.00th=[17433], 99.50th=[18744], 99.90th=[20841], 99.95th=[21365], 00:26:41.611 | 99.99th=[22152] 00:26:41.611 bw ( KiB/s): min=25600, max=38912, per=36.74%, avg=34506.11, stdev=2752.68, samples=19 00:26:41.611 iops : min= 200, max= 304, avg=269.58, stdev=21.51, samples=19 00:26:41.611 lat (msec) : 10=24.99%, 20=74.90%, 50=0.11% 00:26:41.611 cpu : usr=93.78%, sys=4.51%, ctx=7, majf=0, minf=9 00:26:41.611 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:41.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.611 issued rwts: total=2697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.611 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:41.611 filename0: (groupid=0, jobs=1): err= 0: pid=102899: Sat Dec 7 08:17:52 2024 00:26:41.611 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(282MiB/10004msec) 00:26:41.611 slat (nsec): min=7000, max=60545, avg=17353.45, stdev=5833.54 00:26:41.611 clat (usec): min=6663, max=26203, avg=13288.89, stdev=2697.87 00:26:41.611 lat (usec): min=6682, max=26214, avg=13306.24, stdev=2697.89 00:26:41.611 clat percentiles (usec): 
00:26:41.611 | 1.00th=[ 8160], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9896], 00:26:41.611 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:26:41.611 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[16057], 00:26:41.611 | 99.00th=[22152], 99.50th=[23462], 99.90th=[24773], 99.95th=[25035], 00:26:41.611 | 99.99th=[26084] 00:26:41.611 bw ( KiB/s): min=22316, max=30976, per=30.63%, avg=28768.63, stdev=1905.94, samples=19 00:26:41.611 iops : min= 174, max= 242, avg=224.74, stdev=14.95, samples=19 00:26:41.611 lat (msec) : 10=20.62%, 20=78.18%, 50=1.20% 00:26:41.611 cpu : usr=95.04%, sys=3.57%, ctx=10, majf=0, minf=9 00:26:41.611 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:41.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.611 issued rwts: total=2255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.611 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:41.611 00:26:41.611 Run status group 0 (all jobs): 00:26:41.611 READ: bw=91.7MiB/s (96.2MB/s), 28.2MiB/s-33.7MiB/s (29.5MB/s-35.3MB/s), io=918MiB (962MB), run=10004-10005msec 00:26:41.870 08:17:52 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:41.870 08:17:52 -- target/dif.sh@43 -- # local sub 00:26:41.870 08:17:52 -- target/dif.sh@45 -- # for sub in "$@" 00:26:41.870 08:17:52 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:41.870 08:17:52 -- target/dif.sh@36 -- # local sub_id=0 00:26:41.870 08:17:52 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:41.870 08:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.870 08:17:52 -- common/autotest_common.sh@10 -- # set +x 00:26:41.871 08:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.871 08:17:52 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:41.871 08:17:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.871 08:17:52 -- common/autotest_common.sh@10 -- # set +x 00:26:41.871 08:17:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.871 00:26:41.871 real 0m11.018s 00:26:41.871 user 0m28.920s 00:26:41.871 sys 0m1.538s 00:26:41.871 08:17:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:41.871 08:17:52 -- common/autotest_common.sh@10 -- # set +x 00:26:41.871 ************************************ 00:26:41.871 END TEST fio_dif_digest 00:26:41.871 ************************************ 00:26:41.871 08:17:53 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:41.871 08:17:53 -- target/dif.sh@147 -- # nvmftestfini 00:26:41.871 08:17:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:41.871 08:17:53 -- nvmf/common.sh@116 -- # sync 00:26:41.871 08:17:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:41.871 08:17:53 -- nvmf/common.sh@119 -- # set +e 00:26:41.871 08:17:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:41.871 08:17:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:41.871 rmmod nvme_tcp 00:26:41.871 rmmod nvme_fabrics 00:26:41.871 rmmod nvme_keyring 00:26:41.871 08:17:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:41.871 08:17:53 -- nvmf/common.sh@123 -- # set -e 00:26:41.871 08:17:53 -- nvmf/common.sh@124 -- # return 0 00:26:41.871 08:17:53 -- nvmf/common.sh@477 -- # '[' -n 102126 ']' 00:26:41.871 08:17:53 -- nvmf/common.sh@478 -- # killprocess 102126 00:26:41.871 08:17:53 -- common/autotest_common.sh@936 -- # '[' -z 
102126 ']' 00:26:41.871 08:17:53 -- common/autotest_common.sh@940 -- # kill -0 102126 00:26:41.871 08:17:53 -- common/autotest_common.sh@941 -- # uname 00:26:41.871 08:17:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:41.871 08:17:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102126 00:26:42.130 08:17:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:42.130 08:17:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:42.130 killing process with pid 102126 00:26:42.130 08:17:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102126' 00:26:42.130 08:17:53 -- common/autotest_common.sh@955 -- # kill 102126 00:26:42.131 08:17:53 -- common/autotest_common.sh@960 -- # wait 102126 00:26:42.131 08:17:53 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:42.131 08:17:53 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:42.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:42.699 Waiting for block devices as requested 00:26:42.699 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:42.699 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:42.699 08:17:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:42.699 08:17:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:42.699 08:17:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:42.699 08:17:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:42.699 08:17:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.699 08:17:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:42.699 08:17:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.958 08:17:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:42.958 00:26:42.958 real 1m0.134s 00:26:42.958 user 3m53.426s 00:26:42.958 sys 0m13.148s 00:26:42.958 08:17:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:42.958 ************************************ 00:26:42.958 END TEST nvmf_dif 00:26:42.958 08:17:53 -- common/autotest_common.sh@10 -- # set +x 00:26:42.958 ************************************ 00:26:42.958 08:17:54 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:42.958 08:17:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:42.958 08:17:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:42.958 08:17:54 -- common/autotest_common.sh@10 -- # set +x 00:26:42.958 ************************************ 00:26:42.958 START TEST nvmf_abort_qd_sizes 00:26:42.958 ************************************ 00:26:42.958 08:17:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:42.958 * Looking for test storage... 
00:26:42.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:42.958 08:17:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:42.958 08:17:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:42.958 08:17:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:42.958 08:17:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:42.958 08:17:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:42.958 08:17:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:42.958 08:17:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:42.958 08:17:54 -- scripts/common.sh@335 -- # IFS=.-: 00:26:42.958 08:17:54 -- scripts/common.sh@335 -- # read -ra ver1 00:26:42.958 08:17:54 -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.958 08:17:54 -- scripts/common.sh@336 -- # read -ra ver2 00:26:42.958 08:17:54 -- scripts/common.sh@337 -- # local 'op=<' 00:26:42.958 08:17:54 -- scripts/common.sh@339 -- # ver1_l=2 00:26:42.958 08:17:54 -- scripts/common.sh@340 -- # ver2_l=1 00:26:42.958 08:17:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:42.958 08:17:54 -- scripts/common.sh@343 -- # case "$op" in 00:26:42.958 08:17:54 -- scripts/common.sh@344 -- # : 1 00:26:42.958 08:17:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:42.958 08:17:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.958 08:17:54 -- scripts/common.sh@364 -- # decimal 1 00:26:42.958 08:17:54 -- scripts/common.sh@352 -- # local d=1 00:26:42.958 08:17:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.958 08:17:54 -- scripts/common.sh@354 -- # echo 1 00:26:42.958 08:17:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:43.218 08:17:54 -- scripts/common.sh@365 -- # decimal 2 00:26:43.218 08:17:54 -- scripts/common.sh@352 -- # local d=2 00:26:43.218 08:17:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:43.218 08:17:54 -- scripts/common.sh@354 -- # echo 2 00:26:43.218 08:17:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:43.218 08:17:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:43.218 08:17:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:43.218 08:17:54 -- scripts/common.sh@367 -- # return 0 00:26:43.218 08:17:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:43.218 08:17:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.218 --rc genhtml_branch_coverage=1 00:26:43.218 --rc genhtml_function_coverage=1 00:26:43.218 --rc genhtml_legend=1 00:26:43.218 --rc geninfo_all_blocks=1 00:26:43.218 --rc geninfo_unexecuted_blocks=1 00:26:43.218 00:26:43.218 ' 00:26:43.218 08:17:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.218 --rc genhtml_branch_coverage=1 00:26:43.218 --rc genhtml_function_coverage=1 00:26:43.218 --rc genhtml_legend=1 00:26:43.218 --rc geninfo_all_blocks=1 00:26:43.218 --rc geninfo_unexecuted_blocks=1 00:26:43.218 00:26:43.218 ' 00:26:43.218 08:17:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.218 --rc genhtml_branch_coverage=1 00:26:43.218 --rc genhtml_function_coverage=1 00:26:43.218 --rc genhtml_legend=1 00:26:43.218 --rc geninfo_all_blocks=1 00:26:43.218 --rc geninfo_unexecuted_blocks=1 00:26:43.218 00:26:43.218 ' 00:26:43.218 
08:17:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:43.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:43.218 --rc genhtml_branch_coverage=1 00:26:43.218 --rc genhtml_function_coverage=1 00:26:43.218 --rc genhtml_legend=1 00:26:43.218 --rc geninfo_all_blocks=1 00:26:43.218 --rc geninfo_unexecuted_blocks=1 00:26:43.218 00:26:43.218 ' 00:26:43.218 08:17:54 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:43.218 08:17:54 -- nvmf/common.sh@7 -- # uname -s 00:26:43.218 08:17:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:43.218 08:17:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:43.218 08:17:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:43.218 08:17:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:43.218 08:17:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:43.218 08:17:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:43.218 08:17:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:43.218 08:17:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:43.218 08:17:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:43.218 08:17:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:43.218 08:17:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:26:43.218 08:17:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=eb673a70-3a3d-4301-872c-26c9ce6fa6ec 00:26:43.218 08:17:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:43.218 08:17:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:43.218 08:17:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:43.218 08:17:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:43.218 08:17:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:43.218 08:17:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:43.218 08:17:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:43.218 08:17:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.218 08:17:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.218 08:17:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.218 08:17:54 -- paths/export.sh@5 -- # export PATH 00:26:43.218 08:17:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:43.218 08:17:54 -- nvmf/common.sh@46 -- # : 0 00:26:43.218 08:17:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:43.218 08:17:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:43.218 08:17:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:43.218 08:17:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:43.219 08:17:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:43.219 08:17:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:43.219 08:17:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:43.219 08:17:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:43.219 08:17:54 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:43.219 08:17:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:43.219 08:17:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:43.219 08:17:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:43.219 08:17:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:43.219 08:17:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:43.219 08:17:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.219 08:17:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:43.219 08:17:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.219 08:17:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:43.219 08:17:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:43.219 08:17:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:43.219 08:17:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:43.219 08:17:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:43.219 08:17:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:43.219 08:17:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.219 08:17:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.219 08:17:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:43.219 08:17:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:43.219 08:17:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:43.219 08:17:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:43.219 08:17:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:43.219 08:17:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.219 08:17:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:43.219 08:17:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:43.219 08:17:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:43.219 08:17:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:43.219 08:17:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:43.219 08:17:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:43.219 Cannot find device "nvmf_tgt_br" 00:26:43.219 08:17:54 -- nvmf/common.sh@154 -- # true 00:26:43.219 08:17:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:43.219 Cannot find device "nvmf_tgt_br2" 00:26:43.219 08:17:54 -- nvmf/common.sh@155 -- # true 
00:26:43.219 08:17:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:43.219 08:17:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:43.219 Cannot find device "nvmf_tgt_br" 00:26:43.219 08:17:54 -- nvmf/common.sh@157 -- # true 00:26:43.219 08:17:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:43.219 Cannot find device "nvmf_tgt_br2" 00:26:43.219 08:17:54 -- nvmf/common.sh@158 -- # true 00:26:43.219 08:17:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:43.219 08:17:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:43.219 08:17:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:43.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:43.219 08:17:54 -- nvmf/common.sh@161 -- # true 00:26:43.219 08:17:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:43.219 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:43.219 08:17:54 -- nvmf/common.sh@162 -- # true 00:26:43.219 08:17:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:43.219 08:17:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:43.219 08:17:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:43.219 08:17:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:43.219 08:17:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:43.219 08:17:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:43.219 08:17:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:43.219 08:17:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:43.219 08:17:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:43.219 08:17:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:43.483 08:17:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:43.483 08:17:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:43.483 08:17:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:43.483 08:17:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:43.483 08:17:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:43.483 08:17:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:43.483 08:17:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:43.483 08:17:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:43.483 08:17:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:43.483 08:17:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:43.483 08:17:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:43.483 08:17:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:43.483 08:17:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:43.483 08:17:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:43.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:43.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:26:43.483 00:26:43.483 --- 10.0.0.2 ping statistics --- 00:26:43.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.483 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:43.483 08:17:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:43.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:43.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:26:43.483 00:26:43.483 --- 10.0.0.3 ping statistics --- 00:26:43.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.483 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:43.483 08:17:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:43.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:26:43.483 00:26:43.483 --- 10.0.0.1 ping statistics --- 00:26:43.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.483 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:43.483 08:17:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.483 08:17:54 -- nvmf/common.sh@421 -- # return 0 00:26:43.483 08:17:54 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:43.483 08:17:54 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:44.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:44.307 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:44.307 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:44.307 08:17:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.307 08:17:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:44.307 08:17:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:44.307 08:17:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.307 08:17:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:44.307 08:17:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:44.307 08:17:55 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:44.307 08:17:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:44.307 08:17:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:44.307 08:17:55 -- common/autotest_common.sh@10 -- # set +x 00:26:44.307 08:17:55 -- nvmf/common.sh@469 -- # nvmfpid=103498 00:26:44.307 08:17:55 -- nvmf/common.sh@470 -- # waitforlisten 103498 00:26:44.307 08:17:55 -- common/autotest_common.sh@829 -- # '[' -z 103498 ']' 00:26:44.307 08:17:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:44.307 08:17:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.307 08:17:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:44.307 08:17:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.307 08:17:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:44.307 08:17:55 -- common/autotest_common.sh@10 -- # set +x 00:26:44.566 [2024-12-07 08:17:55.590833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:44.566 [2024-12-07 08:17:55.590936] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.566 [2024-12-07 08:17:55.735327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:44.566 [2024-12-07 08:17:55.811259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:44.566 [2024-12-07 08:17:55.811701] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.566 [2024-12-07 08:17:55.811874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.566 [2024-12-07 08:17:55.812036] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.566 [2024-12-07 08:17:55.812396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.566 [2024-12-07 08:17:55.812535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.566 [2024-12-07 08:17:55.813076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.566 [2024-12-07 08:17:55.813087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.504 08:17:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:45.504 08:17:56 -- common/autotest_common.sh@862 -- # return 0 00:26:45.504 08:17:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:45.504 08:17:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:45.504 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:45.504 08:17:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.504 08:17:56 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:45.504 08:17:56 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:45.504 08:17:56 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:45.504 08:17:56 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:45.504 08:17:56 -- scripts/common.sh@312 -- # local nvmes 00:26:45.504 08:17:56 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:45.505 08:17:56 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:45.505 08:17:56 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:45.505 08:17:56 -- scripts/common.sh@297 -- # local bdf= 00:26:45.505 08:17:56 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:45.505 08:17:56 -- scripts/common.sh@232 -- # local class 00:26:45.505 08:17:56 -- scripts/common.sh@233 -- # local subclass 00:26:45.505 08:17:56 -- scripts/common.sh@234 -- # local progif 00:26:45.505 08:17:56 -- scripts/common.sh@235 -- # printf %02x 1 00:26:45.505 08:17:56 -- scripts/common.sh@235 -- # class=01 00:26:45.505 08:17:56 -- scripts/common.sh@236 -- # printf %02x 8 00:26:45.505 08:17:56 -- scripts/common.sh@236 -- # subclass=08 00:26:45.505 08:17:56 -- scripts/common.sh@237 -- # printf %02x 2 00:26:45.505 08:17:56 -- scripts/common.sh@237 -- # progif=02 00:26:45.505 08:17:56 -- scripts/common.sh@239 -- # hash lspci 00:26:45.505 08:17:56 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:45.505 08:17:56 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:45.505 08:17:56 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:45.505 08:17:56 -- 
scripts/common.sh@244 -- # tr -d '"' 00:26:45.505 08:17:56 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:45.505 08:17:56 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:45.505 08:17:56 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:45.505 08:17:56 -- scripts/common.sh@15 -- # local i 00:26:45.505 08:17:56 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:45.505 08:17:56 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:45.505 08:17:56 -- scripts/common.sh@24 -- # return 0 00:26:45.505 08:17:56 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:45.505 08:17:56 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:45.505 08:17:56 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:45.505 08:17:56 -- scripts/common.sh@15 -- # local i 00:26:45.505 08:17:56 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:45.505 08:17:56 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:45.505 08:17:56 -- scripts/common.sh@24 -- # return 0 00:26:45.505 08:17:56 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:45.505 08:17:56 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:45.505 08:17:56 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:45.505 08:17:56 -- scripts/common.sh@322 -- # uname -s 00:26:45.505 08:17:56 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:45.505 08:17:56 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:45.505 08:17:56 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:45.505 08:17:56 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:45.505 08:17:56 -- scripts/common.sh@322 -- # uname -s 00:26:45.505 08:17:56 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:45.505 08:17:56 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:45.505 08:17:56 -- scripts/common.sh@327 -- # (( 2 )) 00:26:45.505 08:17:56 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:45.505 08:17:56 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:45.505 08:17:56 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:45.505 08:17:56 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:45.505 08:17:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:45.505 08:17:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:45.505 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:45.505 ************************************ 00:26:45.505 START TEST spdk_target_abort 00:26:45.505 ************************************ 00:26:45.505 08:17:56 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:45.505 08:17:56 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:45.505 08:17:56 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:45.505 08:17:56 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:45.505 08:17:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.505 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:45.764 spdk_targetn1 00:26:45.764 08:17:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:45.764 08:17:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.764 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:45.764 [2024-12-07 
08:17:56.788130] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.764 08:17:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:45.764 08:17:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.764 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:45.764 08:17:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:45.764 08:17:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.764 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:45.764 08:17:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:45.764 08:17:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.764 08:17:56 -- common/autotest_common.sh@10 -- # set +x 00:26:45.764 [2024-12-07 08:17:56.820342] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.764 08:17:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:45.764 08:17:56 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:49.046 Initializing NVMe Controllers 00:26:49.046 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:49.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:49.046 Initialization complete. Launching workers. 00:26:49.046 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10127, failed: 0 00:26:49.046 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1071, failed to submit 9056 00:26:49.046 success 800, unsuccess 271, failed 0 00:26:49.046 08:18:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:49.046 08:18:00 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:52.335 [2024-12-07 08:18:03.290298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e45c0 is same with the state(5) to be set 00:26:52.335 [2024-12-07 08:18:03.290353] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e45c0 is same with the state(5) to be set 00:26:52.335 [2024-12-07 08:18:03.290363] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e45c0 is same with the state(5) to be set 00:26:52.335 Initializing NVMe Controllers 00:26:52.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:52.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:52.335 Initialization complete. Launching workers. 00:26:52.335 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5928, failed: 0 00:26:52.335 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1239, failed to submit 4689 00:26:52.335 success 254, unsuccess 985, failed 0 00:26:52.335 08:18:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:52.335 08:18:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:55.627 Initializing NVMe Controllers 00:26:55.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:55.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:55.627 Initialization complete. Launching workers. 
00:26:55.627 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31541, failed: 0 00:26:55.627 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2642, failed to submit 28899 00:26:55.627 success 516, unsuccess 2126, failed 0 00:26:55.627 08:18:06 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:55.627 08:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.627 08:18:06 -- common/autotest_common.sh@10 -- # set +x 00:26:55.627 08:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.627 08:18:06 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:55.627 08:18:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.627 08:18:06 -- common/autotest_common.sh@10 -- # set +x 00:26:55.887 08:18:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.887 08:18:06 -- target/abort_qd_sizes.sh@62 -- # killprocess 103498 00:26:55.887 08:18:06 -- common/autotest_common.sh@936 -- # '[' -z 103498 ']' 00:26:55.887 08:18:06 -- common/autotest_common.sh@940 -- # kill -0 103498 00:26:55.887 08:18:06 -- common/autotest_common.sh@941 -- # uname 00:26:55.887 08:18:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:55.887 08:18:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 103498 00:26:55.887 08:18:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:55.887 08:18:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:55.887 killing process with pid 103498 00:26:55.887 08:18:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 103498' 00:26:55.887 08:18:07 -- common/autotest_common.sh@955 -- # kill 103498 00:26:55.887 08:18:07 -- common/autotest_common.sh@960 -- # wait 103498 00:26:56.147 00:26:56.147 real 0m10.542s 00:26:56.147 user 0m43.084s 00:26:56.147 sys 0m1.886s 00:26:56.147 08:18:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:56.147 ************************************ 00:26:56.147 08:18:07 -- common/autotest_common.sh@10 -- # set +x 00:26:56.147 END TEST spdk_target_abort 00:26:56.147 ************************************ 00:26:56.147 08:18:07 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:56.147 08:18:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:56.147 08:18:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.147 08:18:07 -- common/autotest_common.sh@10 -- # set +x 00:26:56.147 ************************************ 00:26:56.147 START TEST kernel_target_abort 00:26:56.147 ************************************ 00:26:56.147 08:18:07 -- common/autotest_common.sh@1114 -- # kernel_target 00:26:56.147 08:18:07 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:56.147 08:18:07 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:56.147 08:18:07 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:56.147 08:18:07 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:56.147 08:18:07 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:56.147 08:18:07 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:56.147 08:18:07 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:56.147 08:18:07 -- nvmf/common.sh@627 -- # local block nvme 00:26:56.147 08:18:07 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:56.147 08:18:07 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:56.147 08:18:07 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:56.147 08:18:07 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:56.406 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:56.665 Waiting for block devices as requested 00:26:56.665 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:56.665 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:56.665 08:18:07 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:56.665 08:18:07 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:56.665 08:18:07 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:56.665 08:18:07 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:56.665 08:18:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:56.924 No valid GPT data, bailing 00:26:56.924 08:18:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:56.924 08:18:07 -- scripts/common.sh@393 -- # pt= 00:26:56.924 08:18:07 -- scripts/common.sh@394 -- # return 1 00:26:56.924 08:18:07 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:56.924 08:18:07 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:56.924 08:18:07 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:56.924 08:18:07 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:56.924 08:18:07 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:56.924 08:18:07 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:56.924 No valid GPT data, bailing 00:26:56.924 08:18:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:56.924 08:18:08 -- scripts/common.sh@393 -- # pt= 00:26:56.924 08:18:08 -- scripts/common.sh@394 -- # return 1 00:26:56.924 08:18:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:56.924 08:18:08 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:56.924 08:18:08 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:56.924 08:18:08 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:56.924 08:18:08 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:56.924 08:18:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:56.924 No valid GPT data, bailing 00:26:56.924 08:18:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:56.924 08:18:08 -- scripts/common.sh@393 -- # pt= 00:26:56.924 08:18:08 -- scripts/common.sh@394 -- # return 1 00:26:56.924 08:18:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:56.924 08:18:08 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:56.924 08:18:08 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:56.924 08:18:08 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:56.924 08:18:08 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:56.924 08:18:08 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:56.924 No valid GPT data, bailing 00:26:57.183 08:18:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:57.183 08:18:08 -- scripts/common.sh@393 -- # pt= 00:26:57.183 08:18:08 -- scripts/common.sh@394 -- # return 1 00:26:57.183 08:18:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:57.183 08:18:08 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:57.183 08:18:08 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:57.183 08:18:08 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:57.183 08:18:08 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:57.183 08:18:08 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:57.183 08:18:08 -- nvmf/common.sh@654 -- # echo 1 00:26:57.183 08:18:08 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:57.183 08:18:08 -- nvmf/common.sh@656 -- # echo 1 00:26:57.183 08:18:08 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:57.183 08:18:08 -- nvmf/common.sh@663 -- # echo tcp 00:26:57.183 08:18:08 -- nvmf/common.sh@664 -- # echo 4420 00:26:57.183 08:18:08 -- nvmf/common.sh@665 -- # echo ipv4 00:26:57.183 08:18:08 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:57.183 08:18:08 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:eb673a70-3a3d-4301-872c-26c9ce6fa6ec --hostid=eb673a70-3a3d-4301-872c-26c9ce6fa6ec -a 10.0.0.1 -t tcp -s 4420 00:26:57.183 00:26:57.183 Discovery Log Number of Records 2, Generation counter 2 00:26:57.183 =====Discovery Log Entry 0====== 00:26:57.183 trtype: tcp 00:26:57.183 adrfam: ipv4 00:26:57.183 subtype: current discovery subsystem 00:26:57.183 treq: not specified, sq flow control disable supported 00:26:57.183 portid: 1 00:26:57.183 trsvcid: 4420 00:26:57.183 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:57.183 traddr: 10.0.0.1 00:26:57.183 eflags: none 00:26:57.183 sectype: none 00:26:57.183 =====Discovery Log Entry 1====== 00:26:57.183 trtype: tcp 00:26:57.183 adrfam: ipv4 00:26:57.183 subtype: nvme subsystem 00:26:57.183 treq: not specified, sq flow control disable supported 00:26:57.183 portid: 1 00:26:57.183 trsvcid: 4420 00:26:57.183 subnqn: kernel_target 00:26:57.183 traddr: 10.0.0.1 00:26:57.183 eflags: none 00:26:57.183 sectype: none 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:57.183 08:18:08 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:00.474 Initializing NVMe Controllers 00:27:00.474 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:00.474 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:00.474 Initialization complete. Launching workers. 00:27:00.474 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30834, failed: 0 00:27:00.474 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30834, failed to submit 0 00:27:00.474 success 0, unsuccess 30834, failed 0 00:27:00.474 08:18:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:00.474 08:18:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:03.761 Initializing NVMe Controllers 00:27:03.761 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:03.761 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:03.761 Initialization complete. Launching workers. 00:27:03.761 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65773, failed: 0 00:27:03.761 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26470, failed to submit 39303 00:27:03.761 success 0, unsuccess 26470, failed 0 00:27:03.761 08:18:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:03.761 08:18:14 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:07.045 Initializing NVMe Controllers 00:27:07.045 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:07.045 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:07.045 Initialization complete. Launching workers. 
00:27:07.045 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 70865, failed: 0 00:27:07.045 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17686, failed to submit 53179 00:27:07.045 success 0, unsuccess 17686, failed 0 00:27:07.045 08:18:17 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:07.045 08:18:17 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:07.045 08:18:17 -- nvmf/common.sh@677 -- # echo 0 00:27:07.045 08:18:17 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:07.045 08:18:17 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:07.045 08:18:17 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:07.045 08:18:17 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:07.045 08:18:17 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:07.045 08:18:17 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:07.045 00:27:07.045 real 0m10.554s 00:27:07.045 user 0m5.241s 00:27:07.045 sys 0m2.547s 00:27:07.045 08:18:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:07.045 08:18:17 -- common/autotest_common.sh@10 -- # set +x 00:27:07.045 ************************************ 00:27:07.045 END TEST kernel_target_abort 00:27:07.045 ************************************ 00:27:07.045 08:18:17 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:07.045 08:18:17 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:07.045 08:18:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:07.045 08:18:17 -- nvmf/common.sh@116 -- # sync 00:27:07.045 08:18:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:07.045 08:18:17 -- nvmf/common.sh@119 -- # set +e 00:27:07.045 08:18:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:07.045 08:18:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:07.045 rmmod nvme_tcp 00:27:07.045 rmmod nvme_fabrics 00:27:07.045 rmmod nvme_keyring 00:27:07.045 08:18:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:07.045 08:18:17 -- nvmf/common.sh@123 -- # set -e 00:27:07.045 08:18:17 -- nvmf/common.sh@124 -- # return 0 00:27:07.045 08:18:17 -- nvmf/common.sh@477 -- # '[' -n 103498 ']' 00:27:07.045 08:18:17 -- nvmf/common.sh@478 -- # killprocess 103498 00:27:07.045 08:18:17 -- common/autotest_common.sh@936 -- # '[' -z 103498 ']' 00:27:07.045 08:18:17 -- common/autotest_common.sh@940 -- # kill -0 103498 00:27:07.045 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (103498) - No such process 00:27:07.045 Process with pid 103498 is not found 00:27:07.045 08:18:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 103498 is not found' 00:27:07.045 08:18:17 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:07.045 08:18:17 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:07.613 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:07.613 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:07.613 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:07.613 08:18:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:07.613 08:18:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:07.613 08:18:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:07.613 08:18:18 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:07.613 08:18:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.613 08:18:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:07.613 08:18:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.613 08:18:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:07.613 00:27:07.613 real 0m24.756s 00:27:07.613 user 0m49.846s 00:27:07.613 sys 0m5.842s 00:27:07.613 08:18:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:07.613 08:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:07.613 ************************************ 00:27:07.613 END TEST nvmf_abort_qd_sizes 00:27:07.613 ************************************ 00:27:07.613 08:18:18 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:07.613 08:18:18 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:07.613 08:18:18 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:07.613 08:18:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:07.613 08:18:18 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:07.614 08:18:18 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:07.614 08:18:18 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:07.614 08:18:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:07.614 08:18:18 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:07.614 08:18:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:07.614 08:18:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:07.614 08:18:18 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:07.614 08:18:18 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:07.614 08:18:18 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:07.614 08:18:18 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:07.614 08:18:18 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:07.614 08:18:18 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:07.614 08:18:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.614 08:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:07.614 08:18:18 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:07.614 08:18:18 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:07.614 08:18:18 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:07.614 08:18:18 -- common/autotest_common.sh@10 -- # set +x 00:27:09.513 INFO: APP EXITING 00:27:09.513 INFO: killing all VMs 00:27:09.513 INFO: killing vhost app 00:27:09.513 INFO: EXIT DONE 00:27:10.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:10.449 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:10.449 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:11.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.015 Cleaning 00:27:11.015 Removing: /var/run/dpdk/spdk0/config 00:27:11.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:11.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:11.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:11.015 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:11.015 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:11.015 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:11.015 Removing: /var/run/dpdk/spdk1/config 00:27:11.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:11.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:11.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:11.015 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:11.015 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:11.015 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:11.015 Removing: /var/run/dpdk/spdk2/config 00:27:11.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:11.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:11.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:11.015 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:11.015 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:11.015 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:11.015 Removing: /var/run/dpdk/spdk3/config 00:27:11.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:11.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:11.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:11.015 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:11.015 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:11.015 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:11.015 Removing: /var/run/dpdk/spdk4/config 00:27:11.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:11.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:11.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:11.015 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:11.273 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:11.273 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:11.273 Removing: /dev/shm/nvmf_trace.0 00:27:11.273 Removing: /dev/shm/spdk_tgt_trace.pid67587 00:27:11.273 Removing: /var/run/dpdk/spdk0 00:27:11.273 Removing: /var/run/dpdk/spdk1 00:27:11.273 Removing: /var/run/dpdk/spdk2 00:27:11.273 Removing: /var/run/dpdk/spdk3 00:27:11.273 Removing: /var/run/dpdk/spdk4 00:27:11.273 Removing: /var/run/dpdk/spdk_pid100483 00:27:11.273 Removing: /var/run/dpdk/spdk_pid100690 00:27:11.273 Removing: /var/run/dpdk/spdk_pid100975 00:27:11.273 Removing: /var/run/dpdk/spdk_pid101281 00:27:11.273 Removing: /var/run/dpdk/spdk_pid101828 00:27:11.273 Removing: /var/run/dpdk/spdk_pid101833 00:27:11.273 Removing: /var/run/dpdk/spdk_pid102201 00:27:11.273 Removing: /var/run/dpdk/spdk_pid102361 00:27:11.273 Removing: /var/run/dpdk/spdk_pid102523 00:27:11.273 Removing: /var/run/dpdk/spdk_pid102616 00:27:11.273 Removing: /var/run/dpdk/spdk_pid102778 00:27:11.273 Removing: /var/run/dpdk/spdk_pid102887 00:27:11.273 Removing: /var/run/dpdk/spdk_pid103568 00:27:11.273 Removing: /var/run/dpdk/spdk_pid103603 00:27:11.273 Removing: /var/run/dpdk/spdk_pid103639 00:27:11.273 Removing: /var/run/dpdk/spdk_pid103888 00:27:11.273 Removing: /var/run/dpdk/spdk_pid103923 00:27:11.273 Removing: /var/run/dpdk/spdk_pid103956 00:27:11.273 Removing: /var/run/dpdk/spdk_pid67435 00:27:11.273 Removing: /var/run/dpdk/spdk_pid67587 00:27:11.273 Removing: /var/run/dpdk/spdk_pid67914 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68183 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68366 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68455 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68554 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68645 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68689 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68719 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68782 00:27:11.273 Removing: /var/run/dpdk/spdk_pid68906 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69539 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69603 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69672 00:27:11.273 Removing: 
/var/run/dpdk/spdk_pid69700 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69772 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69800 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69875 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69903 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69960 00:27:11.273 Removing: /var/run/dpdk/spdk_pid69992 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70038 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70068 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70227 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70257 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70341 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70409 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70438 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70492 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70512 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70546 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70560 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70600 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70616 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70651 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70670 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70699 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70721 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70755 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70775 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70809 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70829 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70858 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70877 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70912 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70926 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70966 00:27:11.273 Removing: /var/run/dpdk/spdk_pid70980 00:27:11.273 Removing: /var/run/dpdk/spdk_pid71014 00:27:11.273 Removing: /var/run/dpdk/spdk_pid71034 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71063 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71088 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71117 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71131 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71171 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71185 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71225 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71239 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71274 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71293 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71322 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71345 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71382 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71405 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71442 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71456 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71491 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71510 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71546 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71623 00:27:11.532 Removing: /var/run/dpdk/spdk_pid71741 00:27:11.532 Removing: /var/run/dpdk/spdk_pid72175 00:27:11.532 Removing: /var/run/dpdk/spdk_pid79142 00:27:11.532 Removing: /var/run/dpdk/spdk_pid79490 00:27:11.532 Removing: /var/run/dpdk/spdk_pid81923 00:27:11.532 Removing: /var/run/dpdk/spdk_pid82307 00:27:11.532 Removing: /var/run/dpdk/spdk_pid82552 00:27:11.532 Removing: /var/run/dpdk/spdk_pid82598 00:27:11.532 Removing: /var/run/dpdk/spdk_pid82917 00:27:11.532 Removing: /var/run/dpdk/spdk_pid82967 00:27:11.532 Removing: /var/run/dpdk/spdk_pid83351 00:27:11.532 Removing: /var/run/dpdk/spdk_pid83883 00:27:11.532 Removing: /var/run/dpdk/spdk_pid84325 00:27:11.532 Removing: /var/run/dpdk/spdk_pid85274 
00:27:11.532 Removing: /var/run/dpdk/spdk_pid86267 00:27:11.532 Removing: /var/run/dpdk/spdk_pid86385 00:27:11.532 Removing: /var/run/dpdk/spdk_pid86447 00:27:11.532 Removing: /var/run/dpdk/spdk_pid87937 00:27:11.532 Removing: /var/run/dpdk/spdk_pid88178 00:27:11.532 Removing: /var/run/dpdk/spdk_pid88628 00:27:11.532 Removing: /var/run/dpdk/spdk_pid88742 00:27:11.532 Removing: /var/run/dpdk/spdk_pid88898 00:27:11.532 Removing: /var/run/dpdk/spdk_pid88939 00:27:11.532 Removing: /var/run/dpdk/spdk_pid88985 00:27:11.532 Removing: /var/run/dpdk/spdk_pid89025 00:27:11.532 Removing: /var/run/dpdk/spdk_pid89194 00:27:11.532 Removing: /var/run/dpdk/spdk_pid89341 00:27:11.532 Removing: /var/run/dpdk/spdk_pid89609 00:27:11.532 Removing: /var/run/dpdk/spdk_pid89726 00:27:11.532 Removing: /var/run/dpdk/spdk_pid90148 00:27:11.532 Removing: /var/run/dpdk/spdk_pid90539 00:27:11.532 Removing: /var/run/dpdk/spdk_pid90541 00:27:11.532 Removing: /var/run/dpdk/spdk_pid92803 00:27:11.532 Removing: /var/run/dpdk/spdk_pid93113 00:27:11.532 Removing: /var/run/dpdk/spdk_pid93627 00:27:11.532 Removing: /var/run/dpdk/spdk_pid93630 00:27:11.532 Removing: /var/run/dpdk/spdk_pid93981 00:27:11.532 Removing: /var/run/dpdk/spdk_pid93995 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94015 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94040 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94051 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94199 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94201 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94309 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94311 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94425 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94427 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94899 00:27:11.532 Removing: /var/run/dpdk/spdk_pid94948 00:27:11.532 Removing: /var/run/dpdk/spdk_pid95099 00:27:11.532 Removing: /var/run/dpdk/spdk_pid95225 00:27:11.532 Removing: /var/run/dpdk/spdk_pid95625 00:27:11.532 Removing: /var/run/dpdk/spdk_pid95880 00:27:11.532 Removing: /var/run/dpdk/spdk_pid96383 00:27:11.532 Removing: /var/run/dpdk/spdk_pid96939 00:27:11.532 Removing: /var/run/dpdk/spdk_pid97404 00:27:11.532 Removing: /var/run/dpdk/spdk_pid97494 00:27:11.532 Removing: /var/run/dpdk/spdk_pid97570 00:27:11.532 Removing: /var/run/dpdk/spdk_pid97643 00:27:11.791 Removing: /var/run/dpdk/spdk_pid97802 00:27:11.791 Removing: /var/run/dpdk/spdk_pid97891 00:27:11.791 Removing: /var/run/dpdk/spdk_pid97980 00:27:11.791 Removing: /var/run/dpdk/spdk_pid98069 00:27:11.791 Removing: /var/run/dpdk/spdk_pid98413 00:27:11.791 Removing: /var/run/dpdk/spdk_pid99118 00:27:11.791 Clean 00:27:11.791 killing process with pid 61831 00:27:11.791 killing process with pid 61832 00:27:11.791 08:18:22 -- common/autotest_common.sh@1446 -- # return 0 00:27:11.791 08:18:22 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:11.791 08:18:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.791 08:18:22 -- common/autotest_common.sh@10 -- # set +x 00:27:11.791 08:18:22 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:11.791 08:18:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:11.791 08:18:22 -- common/autotest_common.sh@10 -- # set +x 00:27:11.791 08:18:23 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:11.791 08:18:23 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:11.791 08:18:23 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:11.791 08:18:23 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:11.791 08:18:23 -- spdk/autotest.sh@383 -- # hostname 00:27:11.791 08:18:23 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:12.049 geninfo: WARNING: invalid characters removed from testname! 00:27:33.982 08:18:42 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:34.548 08:18:45 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:37.084 08:18:47 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:38.992 08:18:50 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:40.898 08:18:52 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:43.431 08:18:54 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:45.331 08:18:56 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:45.331 08:18:56 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:45.331 08:18:56 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:45.331 08:18:56 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:45.331 08:18:56 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:45.331 08:18:56 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:45.331 08:18:56 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
00:27:45.331 08:18:56 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:45.331 08:18:56 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:45.331 08:18:56 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:45.331 08:18:56 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:45.331 08:18:56 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:45.331 08:18:56 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:45.331 08:18:56 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:45.331 08:18:56 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:45.331 08:18:56 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:45.331 08:18:56 -- scripts/common.sh@343 -- $ case "$op" in 00:27:45.331 08:18:56 -- scripts/common.sh@344 -- $ : 1 00:27:45.331 08:18:56 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:45.331 08:18:56 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.331 08:18:56 -- scripts/common.sh@364 -- $ decimal 1 00:27:45.331 08:18:56 -- scripts/common.sh@352 -- $ local d=1 00:27:45.331 08:18:56 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:45.331 08:18:56 -- scripts/common.sh@354 -- $ echo 1 00:27:45.331 08:18:56 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:45.331 08:18:56 -- scripts/common.sh@365 -- $ decimal 2 00:27:45.331 08:18:56 -- scripts/common.sh@352 -- $ local d=2 00:27:45.331 08:18:56 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:45.331 08:18:56 -- scripts/common.sh@354 -- $ echo 2 00:27:45.331 08:18:56 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:45.331 08:18:56 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:45.331 08:18:56 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:45.331 08:18:56 -- scripts/common.sh@367 -- $ return 0 00:27:45.331 08:18:56 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.331 08:18:56 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:45.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.331 --rc genhtml_branch_coverage=1 00:27:45.331 --rc genhtml_function_coverage=1 00:27:45.332 --rc genhtml_legend=1 00:27:45.332 --rc geninfo_all_blocks=1 00:27:45.332 --rc geninfo_unexecuted_blocks=1 00:27:45.332 00:27:45.332 ' 00:27:45.332 08:18:56 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:45.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.332 --rc genhtml_branch_coverage=1 00:27:45.332 --rc genhtml_function_coverage=1 00:27:45.332 --rc genhtml_legend=1 00:27:45.332 --rc geninfo_all_blocks=1 00:27:45.332 --rc geninfo_unexecuted_blocks=1 00:27:45.332 00:27:45.332 ' 00:27:45.332 08:18:56 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:45.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.332 --rc genhtml_branch_coverage=1 00:27:45.332 --rc genhtml_function_coverage=1 00:27:45.332 --rc genhtml_legend=1 00:27:45.332 --rc geninfo_all_blocks=1 00:27:45.332 --rc geninfo_unexecuted_blocks=1 00:27:45.332 00:27:45.332 ' 00:27:45.332 08:18:56 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:45.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.332 --rc genhtml_branch_coverage=1 00:27:45.332 --rc genhtml_function_coverage=1 00:27:45.332 --rc genhtml_legend=1 00:27:45.332 --rc geninfo_all_blocks=1 00:27:45.332 --rc geninfo_unexecuted_blocks=1 00:27:45.332 00:27:45.332 ' 00:27:45.332 08:18:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:45.332 08:18:56 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:45.332 08:18:56 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.332 08:18:56 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.332 08:18:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.332 08:18:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.332 08:18:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.332 08:18:56 -- paths/export.sh@5 -- $ export PATH 00:27:45.332 08:18:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.332 08:18:56 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:45.332 08:18:56 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:45.332 08:18:56 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733559536.XXXXXX 00:27:45.332 08:18:56 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733559536.DfELAA 00:27:45.332 08:18:56 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:45.332 08:18:56 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:27:45.332 08:18:56 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:45.332 08:18:56 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:45.332 08:18:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:45.332 08:18:56 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:45.332 08:18:56 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:45.332 08:18:56 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:45.332 08:18:56 -- common/autotest_common.sh@10 -- $ set +x 00:27:45.332 08:18:56 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:45.332 08:18:56 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:45.332 08:18:56 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:45.332 08:18:56 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:45.332 08:18:56 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:45.332 08:18:56 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:45.332 08:18:56 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:45.332 08:18:56 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:45.332 08:18:56 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:45.332 08:18:56 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:45.332 08:18:56 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:45.332 + [[ -n 5967 ]] 00:27:45.332 + sudo kill 5967 00:27:45.341 [Pipeline] } 00:27:45.357 [Pipeline] // timeout 00:27:45.362 [Pipeline] } 00:27:45.377 [Pipeline] // stage 00:27:45.382 [Pipeline] } 00:27:45.397 [Pipeline] // catchError 00:27:45.407 [Pipeline] stage 00:27:45.409 [Pipeline] { (Stop VM) 00:27:45.421 [Pipeline] sh 00:27:45.702 + vagrant halt 00:27:48.229 ==> default: Halting domain... 00:27:54.921 [Pipeline] sh 00:27:55.198 + vagrant destroy -f 00:27:57.731 ==> default: Removing domain... 00:27:58.004 [Pipeline] sh 00:27:58.286 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:27:58.295 [Pipeline] } 00:27:58.311 [Pipeline] // stage 00:27:58.316 [Pipeline] } 00:27:58.332 [Pipeline] // dir 00:27:58.338 [Pipeline] } 00:27:58.356 [Pipeline] // wrap 00:27:58.363 [Pipeline] } 00:27:58.377 [Pipeline] // catchError 00:27:58.387 [Pipeline] stage 00:27:58.389 [Pipeline] { (Epilogue) 00:27:58.403 [Pipeline] sh 00:27:58.685 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:02.904 [Pipeline] catchError 00:28:02.906 [Pipeline] { 00:28:02.916 [Pipeline] sh 00:28:03.193 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:03.452 Artifacts sizes are good 00:28:03.462 [Pipeline] } 00:28:03.478 [Pipeline] // catchError 00:28:03.491 [Pipeline] archiveArtifacts 00:28:03.498 Archiving artifacts 00:28:03.632 [Pipeline] cleanWs 00:28:03.647 [WS-CLEANUP] Deleting project workspace... 00:28:03.647 [WS-CLEANUP] Deferred wipeout is used... 00:28:03.654 [WS-CLEANUP] done 00:28:03.656 [Pipeline] } 00:28:03.674 [Pipeline] // stage 00:28:03.681 [Pipeline] } 00:28:03.697 [Pipeline] // node 00:28:03.703 [Pipeline] End of Pipeline 00:28:03.745 Finished: SUCCESS
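
For reference, the clean_kernel_target step traced near the top of this section tears down the in-kernel NVMe-oF target through configfs: the namespace is disabled, the port-to-subsystem link is removed, the namespace, port, and subsystem directories are deleted, and the nvmet modules are unloaded. The sketch below mirrors those traced commands; the function name is invented here, and the destination of the traced 'echo 0' (assumed to be the namespace enable attribute) is not shown in the log, so treat that line as an assumption rather than the script's confirmed behavior.

  #!/usr/bin/env bash
  # Minimal sketch of the kernel nvmet target teardown seen in the trace above.
  nvmet=/sys/kernel/config/nvmet

  teardown_kernel_target() {
      # Nothing to do if the subsystem was never configured.
      [[ -e "$nvmet/subsystems/kernel_target" ]] || return 0
      # Disable the namespace first (assumed target of the traced 'echo 0').
      echo 0 > "$nvmet/subsystems/kernel_target/namespaces/1/enable"
      # Drop the port -> subsystem link, then remove namespace, port, and subsystem.
      rm -f "$nvmet/ports/1/subsystems/kernel_target"
      rmdir "$nvmet/subsystems/kernel_target/namespaces/1"
      rmdir "$nvmet/ports/1"
      rmdir "$nvmet/subsystems/kernel_target"
      # Unload the TCP transport and the nvmet core once configfs is empty.
      modprobe -r nvmet_tcp nvmet
  }

The coverage post-processing traced at the end of the run follows the usual lcov flow: capture counters from the build tree, merge them with the pre-test baseline, then strip third-party and uninteresting paths from the total. A condensed sketch, assuming cov_base.info was captured before the tests ran and folding the per-pattern removals into one loop (the trace runs the '/usr/*' pass separately with --ignore-errors unused):

  # Condensed sketch of the traced lcov post-processing.
  out=/home/vagrant/spdk_repo/spdk/../output
  cd /home/vagrant/spdk_repo/spdk
  lcov $LCOV_OPTS -q -c --no-external -d . -t "$(hostname)" -o "$out/cov_test.info"
  lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done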